00:00:00.001 Started by upstream project "autotest-nightly" build number 3791
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3171
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.078 The recommended git tool is: git
00:00:00.078 using credential 00000000-0000-0000-0000-000000000002
00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.120 Fetching changes from the remote Git repository
00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.193 Using shallow fetch with depth 1
00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.193 > git --version # timeout=10
00:00:00.238 > git --version # 'git version 2.39.2'
00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.305 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.305 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.501 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.513 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.527 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:05.527 > git config core.sparsecheckout # timeout=10
00:00:05.539 > git read-tree -mu HEAD # timeout=10
00:00:05.555 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:05.575 Commit message: "pool: fixes for VisualBuild class"
00:00:05.575 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:05.655 [Pipeline] Start of Pipeline
00:00:05.669 [Pipeline] library
00:00:05.671 Loading library shm_lib@master
00:00:05.672 Library shm_lib@master is cached. Copying from home.
00:00:05.687 [Pipeline] node
00:00:05.697 Running on WFP3 in /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:00:05.699 [Pipeline] {
00:00:05.712 [Pipeline] catchError
00:00:05.714 [Pipeline] {
00:00:05.728 [Pipeline] wrap
00:00:05.738 [Pipeline] {
00:00:05.743 [Pipeline] stage
00:00:05.745 [Pipeline] { (Prologue)
00:00:05.933 [Pipeline] sh
00:00:06.222 + logger -p user.info -t JENKINS-CI
00:00:06.245 [Pipeline] echo
00:00:06.247 Node: WFP3
00:00:06.258 [Pipeline] sh
00:00:06.558 [Pipeline] setCustomBuildProperty
00:00:06.569 [Pipeline] echo
00:00:06.570 Cleanup processes
00:00:06.574 [Pipeline] sh
00:00:06.852 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:06.852 3770486 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:06.864 [Pipeline] sh
00:00:07.145 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:07.145 ++ grep -v 'sudo pgrep'
00:00:07.145 ++ awk '{print $1}'
00:00:07.145 + sudo kill -9
00:00:07.145 + true
00:00:07.159 [Pipeline] cleanWs
00:00:07.168 [WS-CLEANUP] Deleting project workspace...
00:00:07.168 [WS-CLEANUP] Deferred wipeout is used...
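
The "Cleanup processes" step traced above is a small shell idiom: find any SPDK processes left over from a previous run under this workspace, extract their PIDs, and kill them. A minimal standalone sketch of what the trace shows (the workspace path is this job's; on this run the PID list comes back empty, which is why the trace ends in `+ true`):

  # Kill stale SPDK processes from an earlier run of this job.
  WORKSPACE=/var/jenkins/workspace/nvmf-cvl-phy-autotest
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # With no matches, kill receives no arguments and fails; '|| true' keeps the step green.
  sudo kill -9 $pids || true
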
00:00:07.175 [WS-CLEANUP] done
00:00:07.180 [Pipeline] setCustomBuildProperty
00:00:07.195 [Pipeline] sh
00:00:07.482 + sudo git config --global --replace-all safe.directory '*'
00:00:07.544 [Pipeline] nodesByLabel
00:00:07.545 Found a total of 2 nodes with the 'sorcerer' label
00:00:07.556 [Pipeline] httpRequest
00:00:07.560 HttpMethod: GET
00:00:07.560 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:07.564 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:07.580 Response Code: HTTP/1.1 200 OK
00:00:07.580 Success: Status code 200 is in the accepted range: 200,404
00:00:07.581 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:13.566 [Pipeline] sh
00:00:13.851 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:13.870 [Pipeline] httpRequest
00:00:13.875 HttpMethod: GET
00:00:13.876 URL: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:00:13.876 Sending request to url: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:00:13.897 Response Code: HTTP/1.1 200 OK
00:00:13.897 Success: Status code 200 is in the accepted range: 200,404
00:00:13.898 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:01:05.061 [Pipeline] sh
00:01:05.382 + tar --no-same-owner -xf spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:01:07.928 [Pipeline] sh
00:01:08.209 + git -C spdk log --oneline -n5
00:01:08.209 e55c9a812 vbdev_error: decrement error_num atomically
00:01:08.209 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid
00:01:08.209 2d610abe8 lib/env_dpdk: add spdk_get_tid function
00:01:08.209 f470a0dc6 event: do not call reactor events from spdk_thread context
00:01:08.209 8d3fdcaba nvmf: cleanup maximum number of subsystem namespace remanent code
00:01:08.221 [Pipeline] }
00:01:08.238 [Pipeline] // stage
00:01:08.246 [Pipeline] stage
00:01:08.249 [Pipeline] { (Prepare)
00:01:08.266 [Pipeline] writeFile
00:01:08.283 [Pipeline] sh
00:01:08.565 + logger -p user.info -t JENKINS-CI
00:01:08.577 [Pipeline] sh
00:01:08.860 + logger -p user.info -t JENKINS-CI
00:01:08.871 [Pipeline] sh
00:01:09.154 + cat autorun-spdk.conf
00:01:09.154 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.154 SPDK_TEST_NVMF=1
00:01:09.154 SPDK_TEST_NVME_CLI=1
00:01:09.154 SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:09.154 SPDK_TEST_NVMF_NICS=e810
00:01:09.154 SPDK_RUN_UBSAN=1
00:01:09.154 NET_TYPE=phy
00:01:09.161 RUN_NIGHTLY=1
00:01:09.167 [Pipeline] readFile
00:01:09.192 [Pipeline] withEnv
00:01:09.195 [Pipeline] {
00:01:09.209 [Pipeline] sh
00:01:09.490 + set -ex
00:01:09.491 + [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf ]]
00:01:09.491 + source /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf
00:01:09.491 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.491 ++ SPDK_TEST_NVMF=1
00:01:09.491 ++ SPDK_TEST_NVME_CLI=1
00:01:09.491 ++ SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:09.491 ++ SPDK_TEST_NVMF_NICS=e810
00:01:09.491 ++ SPDK_RUN_UBSAN=1
00:01:09.491 ++ NET_TYPE=phy
00:01:09.491 ++ RUN_NIGHTLY=1
00:01:09.491 + case $SPDK_TEST_NVMF_NICS in
00:01:09.491 + DRIVERS=ice
00:01:09.491 + [[ rdma == \r\d\m\a ]]
00:01:09.491 + DRIVERS+=' irdma'
00:01:09.491 + [[ -n ice irdma ]]
00:01:09.491 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:09.491 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:09.491 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:09.491 rmmod: ERROR: Module i40iw is not currently loaded
00:01:09.491 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:09.491 + true
00:01:09.491 + for D in $DRIVERS
00:01:09.491 + sudo modprobe ice
00:01:09.491 + for D in $DRIVERS
00:01:09.491 + sudo modprobe irdma
00:01:09.749 + exit 0
00:01:09.760 [Pipeline] }
00:01:09.780 [Pipeline] // withEnv
00:01:09.792 [Pipeline] }
00:01:09.808 [Pipeline] // stage
00:01:09.818 [Pipeline] catchError
00:01:09.820 [Pipeline] {
00:01:09.833 [Pipeline] timeout
00:01:09.833 Timeout set to expire in 40 min
00:01:09.834 [Pipeline] {
00:01:09.845 [Pipeline] stage
00:01:09.847 [Pipeline] { (Tests)
00:01:09.861 [Pipeline] sh
00:01:10.143 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:10.143 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:10.143 + DIR_ROOT=/var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:10.143 + [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest ]]
00:01:10.143 + DIR_SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:01:10.143 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/output
00:01:10.143 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk ]]
00:01:10.143 + [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]]
00:01:10.143 + mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/output
00:01:10.143 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]]
00:01:10.143 + [[ nvmf-cvl-phy-autotest == pkgdep-* ]]
00:01:10.143 + cd /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:01:10.143 + source /etc/os-release
00:01:10.143 ++ NAME='Fedora Linux'
00:01:10.143 ++ VERSION='38 (Cloud Edition)'
00:01:10.143 ++ ID=fedora
00:01:10.143 ++ VERSION_ID=38
00:01:10.144 ++ VERSION_CODENAME=
00:01:10.144 ++ PLATFORM_ID=platform:f38
00:01:10.144 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:10.144 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:10.144 ++ LOGO=fedora-logo-icon
00:01:10.144 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:10.144 ++ HOME_URL=https://fedoraproject.org/
00:01:10.144 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:10.144 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:10.144 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:10.144 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:10.144 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:10.144 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:10.144 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:10.144 ++ SUPPORT_END=2024-05-14
00:01:10.144 ++ VARIANT='Cloud Edition'
00:01:10.144 ++ VARIANT_ID=cloud
00:01:10.144 + uname -a
00:01:10.144 Linux spdk-wfp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux
00:01:10.144 + sudo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status
00:01:12.679 Hugepages
00:01:12.679 node hugesize free / total
00:01:12.679 node0 1048576kB 0 / 0
00:01:12.679 node0 2048kB 0 / 0
00:01:12.679 node1 1048576kB 0 / 0
00:01:12.679 node1 2048kB 0 / 0
00:01:12.679
00:01:12.679 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:12.679 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:12.679 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:12.679 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:12.938 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:01:12.938 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:12.938 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:12.938 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:12.938 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:12.938 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:12.938 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:12.938 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:12.938 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:12.938 + rm -f /tmp/spdk-ld-path
00:01:12.938 + source autorun-spdk.conf
00:01:12.938 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.938 ++ SPDK_TEST_NVMF=1
00:01:12.938 ++ SPDK_TEST_NVME_CLI=1
00:01:12.938 ++ SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:12.938 ++ SPDK_TEST_NVMF_NICS=e810
00:01:12.938 ++ SPDK_RUN_UBSAN=1
00:01:12.938 ++ NET_TYPE=phy
00:01:12.938 ++ RUN_NIGHTLY=1
00:01:12.938 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:12.938 + [[ -n '' ]]
00:01:12.938 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:01:12.938 + for M in /var/spdk/build-*-manifest.txt
00:01:12.938 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:12.938 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/
00:01:12.938 + for M in /var/spdk/build-*-manifest.txt
00:01:12.938 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:12.938 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/
00:01:12.938 ++ uname
00:01:12.938 + [[ Linux == \L\i\n\u\x ]]
00:01:12.938 + sudo dmesg -T
00:01:12.938 + sudo dmesg --clear
00:01:12.938 + dmesg_pid=3771570
00:01:12.938 + [[ Fedora Linux == FreeBSD ]]
00:01:12.938 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.938 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.938 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:12.938 + [[ -x /usr/src/fio-static/fio ]]
00:01:12.938 + export FIO_BIN=/usr/src/fio-static/fio
00:01:12.938 + FIO_BIN=/usr/src/fio-static/fio
00:01:12.938 + sudo dmesg -Tw
00:01:12.938 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\c\v\l\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:12.938 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:12.938 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:12.938 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.938 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.939 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:12.939 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.939 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.939 + spdk/autorun.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf
00:01:12.939 Test configuration:
00:01:12.939 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.939 SPDK_TEST_NVMF=1
00:01:12.939 SPDK_TEST_NVME_CLI=1
00:01:12.939 SPDK_TEST_NVMF_TRANSPORT=rdma
00:01:12.939 SPDK_TEST_NVMF_NICS=e810
00:01:12.939 SPDK_RUN_UBSAN=1
00:01:12.939 NET_TYPE=phy
00:01:12.939 RUN_NIGHTLY=1
10:28:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:01:12.939 10:28:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:12.939 10:28:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:12.939 10:28:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:12.939 10:28:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.939 10:28:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.939 10:28:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.939 10:28:41 -- paths/export.sh@5 -- $ export PATH
00:01:12.939 10:28:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.939 10:28:41 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output
00:01:13.198 10:28:41 -- common/autobuild_common.sh@437 -- $ date +%s
00:01:13.198 10:28:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718008121.XXXXXX
00:01:13.198 10:28:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718008121.Q0tPb4
00:01:13.198 10:28:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:01:13.198 10:28:41 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
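
The `withEnv` block above drives host preparation straight from autorun-spdk.conf, which is plain shell: the harness sources it, then maps the NIC under test to kernel modules. A sketch of the selection logic as traced (not the harness source itself; the e810-to-ice mapping and the rdma branch are read directly off the `+` trace lines):

  # autorun-spdk.conf is sourced, then the NIC choice picks the drivers.
  source /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf
  case $SPDK_TEST_NVMF_NICS in
      e810) DRIVERS=ice ;;    # Intel E810 ("cvl") NICs use the ice driver
  esac
  # The RDMA transport additionally needs E810's RDMA driver.
  [[ $SPDK_TEST_NVMF_TRANSPORT == rdma ]] && DRIVERS+=' irdma'
  for D in $DRIVERS; do
      sudo modprobe $D        # competing rdma modules were rmmod'ed first, as above
  done
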
00:01:13.198 10:28:41 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/'
00:01:13.198 10:28:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:13.198 10:28:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:13.198 10:28:41 -- common/autobuild_common.sh@453 -- $ get_config_params
00:01:13.198 10:28:41 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:13.198 10:28:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:13.198 10:28:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:13.198 10:28:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:01:13.198 10:28:41 -- pm/common@17 -- $ local monitor
00:01:13.198 10:28:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.198 10:28:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.198 10:28:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.198 10:28:41 -- pm/common@21 -- $ date +%s
00:01:13.198 10:28:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.198 10:28:41 -- pm/common@21 -- $ date +%s
00:01:13.198 10:28:42 -- pm/common@25 -- $ sleep 1
00:01:13.198 10:28:42 -- pm/common@21 -- $ date +%s
00:01:13.198 10:28:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718008122
00:01:13.198 10:28:42 -- pm/common@21 -- $ date +%s
00:01:13.198 10:28:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718008122
00:01:13.198 10:28:42 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718008122
00:01:13.198 10:28:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718008122
00:01:13.198 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718008122_collect-vmstat.pm.log
00:01:13.198 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718008122_collect-cpu-load.pm.log
00:01:13.198 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718008122_collect-cpu-temp.pm.log
00:01:13.198 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718008122_collect-bmc-pm.bmc.pm.log
00:01:14.133 10:28:43 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:01:14.133 10:28:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
10:28:43 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:14.133 10:28:43 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:01:14.133 10:28:43 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.133 Mon Jun 10 08:28:43 AM UTC 2024
00:01:14.133 10:28:43 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.133 v24.09-pre-53-ge55c9a812
00:01:14.133 10:28:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:14.133 10:28:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:14.133 10:28:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:14.133 10:28:43 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:01:14.133 10:28:43 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:01:14.133 10:28:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.133 ************************************
00:01:14.133 START TEST ubsan
00:01:14.133 ************************************
00:01:14.133 10:28:43 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:01:14.133 using ubsan
00:01:14.133
00:01:14.133 real 0m0.000s
00:01:14.133 user 0m0.000s
00:01:14.133 sys 0m0.000s
00:01:14.133 10:28:43 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:01:14.133 10:28:43 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.133 ************************************
00:01:14.133 END TEST ubsan
00:01:14.133 ************************************
00:01:14.133 10:28:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:14.133 10:28:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:14.133 10:28:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:14.133 10:28:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:14.133 10:28:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:14.133 10:28:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:14.133 10:28:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:14.133 10:28:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:14.133 10:28:43 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:14.392 Using default SPDK env in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk
00:01:14.392 Using default DPDK in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build
00:01:14.651 Using 'verbs' RDMA provider
00:01:27.801 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:37.853 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:37.853 Creating mk/config.mk...done.
00:01:37.853 Creating mk/cc.flags.mk...done.
00:01:37.853 Type 'make' to build.
00:01:37.853 10:29:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:37.853 10:29:06 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:01:37.853 10:29:06 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:01:37.853 10:29:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.853 ************************************
00:01:37.853 START TEST make
00:01:37.853 ************************************
00:01:37.853 10:29:06 make -- common/autotest_common.sh@1124 -- $ make -j96
00:01:38.419 make[1]: Nothing to be done for 'all'.
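With the prologue done, the build itself is an ordinary configure-and-make, so everything needed to replay it outside Jenkins is visible in the trace. A sketch (the checkout path is an assumption for a local clone; the configure flags are copied verbatim from the `autobuild.sh@67` line above):

  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-shared
  make -j96    # the harness wraps this in run_test; scale -j to your core count
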
00:01:46.548 The Meson build system 00:01:46.548 Version: 1.3.1 00:01:46.548 Source dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk 00:01:46.548 Build dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp 00:01:46.548 Build type: native build 00:01:46.548 Program cat found: YES (/usr/bin/cat) 00:01:46.548 Project name: DPDK 00:01:46.548 Project version: 24.03.0 00:01:46.548 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:46.548 C linker for the host machine: cc ld.bfd 2.39-16 00:01:46.548 Host machine cpu family: x86_64 00:01:46.548 Host machine cpu: x86_64 00:01:46.548 Message: ## Building in Developer Mode ## 00:01:46.548 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.548 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:46.548 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.548 Program python3 found: YES (/usr/bin/python3) 00:01:46.548 Program cat found: YES (/usr/bin/cat) 00:01:46.548 Compiler for C supports arguments -march=native: YES 00:01:46.548 Checking for size of "void *" : 8 00:01:46.548 Checking for size of "void *" : 8 (cached) 00:01:46.548 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:46.548 Library m found: YES 00:01:46.548 Library numa found: YES 00:01:46.548 Has header "numaif.h" : YES 00:01:46.548 Library fdt found: NO 00:01:46.548 Library execinfo found: NO 00:01:46.548 Has header "execinfo.h" : YES 00:01:46.548 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:46.548 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.548 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.548 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.548 Run-time dependency openssl found: YES 3.0.9 00:01:46.548 Run-time dependency libpcap found: YES 1.10.4 00:01:46.548 Has header "pcap.h" with dependency libpcap: YES 00:01:46.548 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.548 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.548 Compiler for C supports arguments -Wformat: YES 00:01:46.548 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:46.548 Compiler for C supports arguments -Wformat-security: NO 00:01:46.548 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.548 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.548 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.548 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.548 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.548 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.548 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.548 Compiler for C supports arguments -Wundef: YES 00:01:46.548 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.548 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.548 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:46.548 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.548 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:46.548 Program objdump found: YES (/usr/bin/objdump) 00:01:46.548 Compiler for C supports arguments -mavx512f: YES 00:01:46.548 Checking if "AVX512 checking" compiles: YES 
00:01:46.548 Fetching value of define "__SSE4_2__" : 1 00:01:46.548 Fetching value of define "__AES__" : 1 00:01:46.548 Fetching value of define "__AVX__" : 1 00:01:46.548 Fetching value of define "__AVX2__" : 1 00:01:46.548 Fetching value of define "__AVX512BW__" : 1 00:01:46.548 Fetching value of define "__AVX512CD__" : 1 00:01:46.548 Fetching value of define "__AVX512DQ__" : 1 00:01:46.548 Fetching value of define "__AVX512F__" : 1 00:01:46.548 Fetching value of define "__AVX512VL__" : 1 00:01:46.548 Fetching value of define "__PCLMUL__" : 1 00:01:46.548 Fetching value of define "__RDRND__" : 1 00:01:46.548 Fetching value of define "__RDSEED__" : 1 00:01:46.548 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.548 Fetching value of define "__znver1__" : (undefined) 00:01:46.548 Fetching value of define "__znver2__" : (undefined) 00:01:46.548 Fetching value of define "__znver3__" : (undefined) 00:01:46.548 Fetching value of define "__znver4__" : (undefined) 00:01:46.548 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:46.548 Message: lib/log: Defining dependency "log" 00:01:46.548 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.548 Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.548 Checking for function "getentropy" : NO 00:01:46.548 Message: lib/eal: Defining dependency "eal" 00:01:46.548 Message: lib/ring: Defining dependency "ring" 00:01:46.548 Message: lib/rcu: Defining dependency "rcu" 00:01:46.548 Message: lib/mempool: Defining dependency "mempool" 00:01:46.548 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.548 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.548 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.548 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.548 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.548 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:46.548 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:46.548 Compiler for C supports arguments -mpclmul: YES 00:01:46.548 Compiler for C supports arguments -maes: YES 00:01:46.548 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.548 Compiler for C supports arguments -mavx512bw: YES 00:01:46.548 Compiler for C supports arguments -mavx512dq: YES 00:01:46.548 Compiler for C supports arguments -mavx512vl: YES 00:01:46.548 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.548 Compiler for C supports arguments -mavx2: YES 00:01:46.548 Compiler for C supports arguments -mavx: YES 00:01:46.548 Message: lib/net: Defining dependency "net" 00:01:46.548 Message: lib/meter: Defining dependency "meter" 00:01:46.548 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.548 Message: lib/pci: Defining dependency "pci" 00:01:46.548 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.548 Message: lib/hash: Defining dependency "hash" 00:01:46.548 Message: lib/timer: Defining dependency "timer" 00:01:46.548 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.548 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.548 Message: lib/dmadev: Defining dependency "dmadev" 00:01:46.548 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.548 Message: lib/power: Defining dependency "power" 00:01:46.548 Message: lib/reorder: Defining dependency "reorder" 00:01:46.548 Message: lib/security: Defining dependency "security" 00:01:46.548 Has header "linux/userfaultfd.h" : YES 00:01:46.548 Has header "linux/vduse.h" : YES 
00:01:46.548 Message: lib/vhost: Defining dependency "vhost" 00:01:46.548 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.548 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.548 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.548 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.548 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:46.548 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:46.548 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:46.548 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:46.548 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:46.548 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:46.548 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.548 Configuring doxy-api-html.conf using configuration 00:01:46.548 Configuring doxy-api-man.conf using configuration 00:01:46.548 Program mandb found: YES (/usr/bin/mandb) 00:01:46.548 Program sphinx-build found: NO 00:01:46.548 Configuring rte_build_config.h using configuration 00:01:46.548 Message: 00:01:46.548 ================= 00:01:46.548 Applications Enabled 00:01:46.548 ================= 00:01:46.548 00:01:46.549 apps: 00:01:46.549 00:01:46.549 00:01:46.549 Message: 00:01:46.549 ================= 00:01:46.549 Libraries Enabled 00:01:46.549 ================= 00:01:46.549 00:01:46.549 libs: 00:01:46.549 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.549 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:46.549 cryptodev, dmadev, power, reorder, security, vhost, 00:01:46.549 00:01:46.549 Message: 00:01:46.549 =============== 00:01:46.549 Drivers Enabled 00:01:46.549 =============== 00:01:46.549 00:01:46.549 common: 00:01:46.549 00:01:46.549 bus: 00:01:46.549 pci, vdev, 00:01:46.549 mempool: 00:01:46.549 ring, 00:01:46.549 dma: 00:01:46.549 00:01:46.549 net: 00:01:46.549 00:01:46.549 crypto: 00:01:46.549 00:01:46.549 compress: 00:01:46.549 00:01:46.549 vdpa: 00:01:46.549 00:01:46.549 00:01:46.549 Message: 00:01:46.549 ================= 00:01:46.549 Content Skipped 00:01:46.549 ================= 00:01:46.549 00:01:46.549 apps: 00:01:46.549 dumpcap: explicitly disabled via build config 00:01:46.549 graph: explicitly disabled via build config 00:01:46.549 pdump: explicitly disabled via build config 00:01:46.549 proc-info: explicitly disabled via build config 00:01:46.549 test-acl: explicitly disabled via build config 00:01:46.549 test-bbdev: explicitly disabled via build config 00:01:46.549 test-cmdline: explicitly disabled via build config 00:01:46.549 test-compress-perf: explicitly disabled via build config 00:01:46.549 test-crypto-perf: explicitly disabled via build config 00:01:46.549 test-dma-perf: explicitly disabled via build config 00:01:46.549 test-eventdev: explicitly disabled via build config 00:01:46.549 test-fib: explicitly disabled via build config 00:01:46.549 test-flow-perf: explicitly disabled via build config 00:01:46.549 test-gpudev: explicitly disabled via build config 00:01:46.549 test-mldev: explicitly disabled via build config 00:01:46.549 test-pipeline: explicitly disabled via build config 00:01:46.549 test-pmd: explicitly disabled via build config 00:01:46.549 test-regex: explicitly disabled via build config 00:01:46.549 test-sad: explicitly disabled via build config 00:01:46.549 test-security-perf: 
explicitly disabled via build config 00:01:46.549 00:01:46.549 libs: 00:01:46.549 argparse: explicitly disabled via build config 00:01:46.549 metrics: explicitly disabled via build config 00:01:46.549 acl: explicitly disabled via build config 00:01:46.549 bbdev: explicitly disabled via build config 00:01:46.549 bitratestats: explicitly disabled via build config 00:01:46.549 bpf: explicitly disabled via build config 00:01:46.549 cfgfile: explicitly disabled via build config 00:01:46.549 distributor: explicitly disabled via build config 00:01:46.549 efd: explicitly disabled via build config 00:01:46.549 eventdev: explicitly disabled via build config 00:01:46.549 dispatcher: explicitly disabled via build config 00:01:46.549 gpudev: explicitly disabled via build config 00:01:46.549 gro: explicitly disabled via build config 00:01:46.549 gso: explicitly disabled via build config 00:01:46.549 ip_frag: explicitly disabled via build config 00:01:46.549 jobstats: explicitly disabled via build config 00:01:46.549 latencystats: explicitly disabled via build config 00:01:46.549 lpm: explicitly disabled via build config 00:01:46.549 member: explicitly disabled via build config 00:01:46.549 pcapng: explicitly disabled via build config 00:01:46.549 rawdev: explicitly disabled via build config 00:01:46.549 regexdev: explicitly disabled via build config 00:01:46.549 mldev: explicitly disabled via build config 00:01:46.549 rib: explicitly disabled via build config 00:01:46.549 sched: explicitly disabled via build config 00:01:46.549 stack: explicitly disabled via build config 00:01:46.549 ipsec: explicitly disabled via build config 00:01:46.549 pdcp: explicitly disabled via build config 00:01:46.549 fib: explicitly disabled via build config 00:01:46.549 port: explicitly disabled via build config 00:01:46.549 pdump: explicitly disabled via build config 00:01:46.549 table: explicitly disabled via build config 00:01:46.549 pipeline: explicitly disabled via build config 00:01:46.549 graph: explicitly disabled via build config 00:01:46.549 node: explicitly disabled via build config 00:01:46.549 00:01:46.549 drivers: 00:01:46.549 common/cpt: not in enabled drivers build config 00:01:46.549 common/dpaax: not in enabled drivers build config 00:01:46.549 common/iavf: not in enabled drivers build config 00:01:46.549 common/idpf: not in enabled drivers build config 00:01:46.549 common/ionic: not in enabled drivers build config 00:01:46.549 common/mvep: not in enabled drivers build config 00:01:46.549 common/octeontx: not in enabled drivers build config 00:01:46.549 bus/auxiliary: not in enabled drivers build config 00:01:46.549 bus/cdx: not in enabled drivers build config 00:01:46.549 bus/dpaa: not in enabled drivers build config 00:01:46.549 bus/fslmc: not in enabled drivers build config 00:01:46.549 bus/ifpga: not in enabled drivers build config 00:01:46.549 bus/platform: not in enabled drivers build config 00:01:46.549 bus/uacce: not in enabled drivers build config 00:01:46.549 bus/vmbus: not in enabled drivers build config 00:01:46.549 common/cnxk: not in enabled drivers build config 00:01:46.549 common/mlx5: not in enabled drivers build config 00:01:46.549 common/nfp: not in enabled drivers build config 00:01:46.549 common/nitrox: not in enabled drivers build config 00:01:46.549 common/qat: not in enabled drivers build config 00:01:46.549 common/sfc_efx: not in enabled drivers build config 00:01:46.549 mempool/bucket: not in enabled drivers build config 00:01:46.549 mempool/cnxk: not in enabled drivers build 
config 00:01:46.549 mempool/dpaa: not in enabled drivers build config 00:01:46.549 mempool/dpaa2: not in enabled drivers build config 00:01:46.549 mempool/octeontx: not in enabled drivers build config 00:01:46.549 mempool/stack: not in enabled drivers build config 00:01:46.549 dma/cnxk: not in enabled drivers build config 00:01:46.549 dma/dpaa: not in enabled drivers build config 00:01:46.549 dma/dpaa2: not in enabled drivers build config 00:01:46.549 dma/hisilicon: not in enabled drivers build config 00:01:46.549 dma/idxd: not in enabled drivers build config 00:01:46.549 dma/ioat: not in enabled drivers build config 00:01:46.549 dma/skeleton: not in enabled drivers build config 00:01:46.549 net/af_packet: not in enabled drivers build config 00:01:46.549 net/af_xdp: not in enabled drivers build config 00:01:46.549 net/ark: not in enabled drivers build config 00:01:46.549 net/atlantic: not in enabled drivers build config 00:01:46.549 net/avp: not in enabled drivers build config 00:01:46.549 net/axgbe: not in enabled drivers build config 00:01:46.549 net/bnx2x: not in enabled drivers build config 00:01:46.549 net/bnxt: not in enabled drivers build config 00:01:46.549 net/bonding: not in enabled drivers build config 00:01:46.549 net/cnxk: not in enabled drivers build config 00:01:46.549 net/cpfl: not in enabled drivers build config 00:01:46.549 net/cxgbe: not in enabled drivers build config 00:01:46.549 net/dpaa: not in enabled drivers build config 00:01:46.549 net/dpaa2: not in enabled drivers build config 00:01:46.549 net/e1000: not in enabled drivers build config 00:01:46.549 net/ena: not in enabled drivers build config 00:01:46.549 net/enetc: not in enabled drivers build config 00:01:46.549 net/enetfec: not in enabled drivers build config 00:01:46.549 net/enic: not in enabled drivers build config 00:01:46.549 net/failsafe: not in enabled drivers build config 00:01:46.549 net/fm10k: not in enabled drivers build config 00:01:46.549 net/gve: not in enabled drivers build config 00:01:46.549 net/hinic: not in enabled drivers build config 00:01:46.549 net/hns3: not in enabled drivers build config 00:01:46.549 net/i40e: not in enabled drivers build config 00:01:46.549 net/iavf: not in enabled drivers build config 00:01:46.549 net/ice: not in enabled drivers build config 00:01:46.549 net/idpf: not in enabled drivers build config 00:01:46.549 net/igc: not in enabled drivers build config 00:01:46.549 net/ionic: not in enabled drivers build config 00:01:46.549 net/ipn3ke: not in enabled drivers build config 00:01:46.549 net/ixgbe: not in enabled drivers build config 00:01:46.549 net/mana: not in enabled drivers build config 00:01:46.549 net/memif: not in enabled drivers build config 00:01:46.549 net/mlx4: not in enabled drivers build config 00:01:46.549 net/mlx5: not in enabled drivers build config 00:01:46.549 net/mvneta: not in enabled drivers build config 00:01:46.549 net/mvpp2: not in enabled drivers build config 00:01:46.549 net/netvsc: not in enabled drivers build config 00:01:46.549 net/nfb: not in enabled drivers build config 00:01:46.549 net/nfp: not in enabled drivers build config 00:01:46.549 net/ngbe: not in enabled drivers build config 00:01:46.549 net/null: not in enabled drivers build config 00:01:46.549 net/octeontx: not in enabled drivers build config 00:01:46.549 net/octeon_ep: not in enabled drivers build config 00:01:46.549 net/pcap: not in enabled drivers build config 00:01:46.549 net/pfe: not in enabled drivers build config 00:01:46.549 net/qede: not in enabled drivers build 
config 00:01:46.549 net/ring: not in enabled drivers build config 00:01:46.549 net/sfc: not in enabled drivers build config 00:01:46.549 net/softnic: not in enabled drivers build config 00:01:46.549 net/tap: not in enabled drivers build config 00:01:46.549 net/thunderx: not in enabled drivers build config 00:01:46.549 net/txgbe: not in enabled drivers build config 00:01:46.549 net/vdev_netvsc: not in enabled drivers build config 00:01:46.549 net/vhost: not in enabled drivers build config 00:01:46.549 net/virtio: not in enabled drivers build config 00:01:46.549 net/vmxnet3: not in enabled drivers build config 00:01:46.549 raw/*: missing internal dependency, "rawdev" 00:01:46.549 crypto/armv8: not in enabled drivers build config 00:01:46.549 crypto/bcmfs: not in enabled drivers build config 00:01:46.549 crypto/caam_jr: not in enabled drivers build config 00:01:46.549 crypto/ccp: not in enabled drivers build config 00:01:46.549 crypto/cnxk: not in enabled drivers build config 00:01:46.549 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.549 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.549 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.549 crypto/mlx5: not in enabled drivers build config 00:01:46.549 crypto/mvsam: not in enabled drivers build config 00:01:46.549 crypto/nitrox: not in enabled drivers build config 00:01:46.549 crypto/null: not in enabled drivers build config 00:01:46.550 crypto/octeontx: not in enabled drivers build config 00:01:46.550 crypto/openssl: not in enabled drivers build config 00:01:46.550 crypto/scheduler: not in enabled drivers build config 00:01:46.550 crypto/uadk: not in enabled drivers build config 00:01:46.550 crypto/virtio: not in enabled drivers build config 00:01:46.550 compress/isal: not in enabled drivers build config 00:01:46.550 compress/mlx5: not in enabled drivers build config 00:01:46.550 compress/nitrox: not in enabled drivers build config 00:01:46.550 compress/octeontx: not in enabled drivers build config 00:01:46.550 compress/zlib: not in enabled drivers build config 00:01:46.550 regex/*: missing internal dependency, "regexdev" 00:01:46.550 ml/*: missing internal dependency, "mldev" 00:01:46.550 vdpa/ifc: not in enabled drivers build config 00:01:46.550 vdpa/mlx5: not in enabled drivers build config 00:01:46.550 vdpa/nfp: not in enabled drivers build config 00:01:46.550 vdpa/sfc: not in enabled drivers build config 00:01:46.550 event/*: missing internal dependency, "eventdev" 00:01:46.550 baseband/*: missing internal dependency, "bbdev" 00:01:46.550 gpu/*: missing internal dependency, "gpudev" 00:01:46.550 00:01:46.550 00:01:46.550 Build targets in project: 85 00:01:46.550 00:01:46.550 DPDK 24.03.0 00:01:46.550 00:01:46.550 User defined options 00:01:46.550 buildtype : debug 00:01:46.550 default_library : shared 00:01:46.550 libdir : lib 00:01:46.550 prefix : /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:01:46.550 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:46.550 c_link_args : 00:01:46.550 cpu_instruction_set: native 00:01:46.550 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:46.550 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:46.550 enable_docs : false 00:01:46.550 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:46.550 enable_kmods : false 00:01:46.550 tests : false 00:01:46.550 00:01:46.550 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.550 ninja: Entering directory `/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp' 00:01:46.550 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.550 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.550 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.550 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.550 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.550 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.550 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.550 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.550 [9/268] Linking static target lib/librte_kvargs.a 00:01:46.550 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.550 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.550 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.550 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.550 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.550 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.550 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.550 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.550 [18/268] Linking static target lib/librte_log.a 00:01:46.550 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.550 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.550 [21/268] Linking static target lib/librte_pci.a 00:01:46.813 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.813 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.814 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.814 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.814 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.814 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.814 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.072 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.072 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.072 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.072 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.072 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.072 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:01:47.072 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.072 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.072 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.072 [38/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:47.072 [39/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:47.072 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.072 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:47.072 [42/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.073 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.073 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.073 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.073 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.073 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.073 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.073 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.073 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.073 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.073 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.073 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.073 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.073 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.073 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.073 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.073 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.073 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.073 [60/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.073 [61/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.073 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.073 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.073 [64/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.073 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:47.073 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.073 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:47.073 [68/268] Linking static target lib/librte_meter.a 00:01:47.073 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.073 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.073 [71/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.073 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.073 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.073 [74/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:47.073 [75/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.073 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.073 [77/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.073 [78/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.073 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.073 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.073 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.073 [82/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.073 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.073 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.073 [85/268] Linking static target lib/librte_ring.a 00:01:47.073 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.073 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.073 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.073 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.073 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.073 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.073 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.073 [93/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.073 [94/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.073 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.073 [96/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.073 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.073 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.073 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.073 [100/268] Linking static target lib/librte_telemetry.a 00:01:47.073 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.073 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.073 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.073 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.333 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.333 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.333 [107/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.333 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.333 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.333 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.333 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.333 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.333 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.333 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.333 [115/268] Compiling C object 
lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.333 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.333 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.333 [118/268] Linking static target lib/librte_net.a 00:01:47.333 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.333 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:47.333 [121/268] Linking static target lib/librte_mempool.a 00:01:47.333 [122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:47.333 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.333 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.333 [125/268] Linking static target lib/librte_rcu.a 00:01:47.333 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.333 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:47.333 [128/268] Linking static target lib/librte_cmdline.a 00:01:47.333 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.333 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.333 [131/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.333 [132/268] Linking static target lib/librte_eal.a 00:01:47.333 [133/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.333 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:47.333 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:47.333 [136/268] Linking target lib/librte_log.so.24.1 00:01:47.333 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:47.333 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.333 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.333 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.333 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:47.333 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:47.333 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:47.592 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.592 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:47.592 [146/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.592 [147/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:47.592 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.592 [149/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:47.592 [150/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:47.592 [151/268] Linking static target lib/librte_mbuf.a 00:01:47.592 [152/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.592 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.592 [154/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.592 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:47.592 [156/268] Generating 
lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.592 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.592 [158/268] Linking static target lib/librte_timer.a 00:01:47.592 [159/268] Linking static target lib/librte_dmadev.a 00:01:47.592 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.592 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.592 [162/268] Linking target lib/librte_kvargs.so.24.1 00:01:47.592 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:47.592 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.592 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.592 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:47.592 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.592 [168/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:47.592 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.592 [170/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.592 [171/268] Linking static target lib/librte_reorder.a 00:01:47.592 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:47.592 [173/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.592 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:47.592 [175/268] Linking target lib/librte_telemetry.so.24.1 00:01:47.592 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:47.592 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.592 [178/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:47.592 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:47.592 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:47.592 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:47.592 [182/268] Linking static target lib/librte_compressdev.a 00:01:47.592 [183/268] Linking static target lib/librte_power.a 00:01:47.592 [184/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:47.592 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:47.852 [186/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.852 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:47.852 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:47.852 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:47.852 [190/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.852 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.852 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:47.852 [193/268] Linking static target lib/librte_hash.a 00:01:47.852 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:47.852 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.852 [196/268] Linking static target drivers/librte_bus_vdev.a 
00:01:47.852 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.852 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.852 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.852 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:47.852 [201/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.852 [202/268] Linking static target lib/librte_security.a 00:01:47.852 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.852 [204/268] Linking static target drivers/librte_bus_pci.a 00:01:47.852 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.852 [206/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.852 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:47.852 [208/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.852 [209/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.852 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.111 [211/268] Linking static target lib/librte_cryptodev.a 00:01:48.111 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.111 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.111 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.111 [215/268] Linking static target drivers/librte_mempool_ring.a 00:01:48.111 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.111 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.111 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.111 [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.369 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.369 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.369 [222/268] Linking static target lib/librte_ethdev.a 00:01:48.369 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.369 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:48.628 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.628 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.628 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.561 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:49.561 [229/268] Linking static target lib/librte_vhost.a 00:01:49.561 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.467 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.737 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:56.737 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.737 [234/268] Linking target lib/librte_eal.so.24.1 00:01:56.737 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:56.737 [236/268] Linking target lib/librte_pci.so.24.1 00:01:56.737 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:56.737 [238/268] Linking target lib/librte_ring.so.24.1 00:01:56.737 [239/268] Linking target lib/librte_timer.so.24.1 00:01:56.737 [240/268] Linking target lib/librte_meter.so.24.1 00:01:56.737 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:56.737 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:56.737 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:56.737 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:56.737 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:56.737 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:56.737 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:56.737 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:56.737 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:56.737 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:56.737 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:56.737 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:56.737 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:56.995 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:56.995 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:56.995 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:01:56.995 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:56.995 [258/268] Linking target lib/librte_net.so.24.1 00:01:57.254 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:57.254 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:57.254 [261/268] Linking target lib/librte_security.so.24.1 00:01:57.254 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:57.254 [263/268] Linking target lib/librte_hash.so.24.1 00:01:57.254 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:57.254 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:57.254 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:57.513 [267/268] Linking target lib/librte_power.so.24.1 00:01:57.514 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:57.514 INFO: autodetecting backend as ninja 00:01:57.514 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:58.486 CC lib/log/log_flags.o 00:01:58.486 CC lib/log/log_deprecated.o 00:01:58.486 CC lib/log/log.o 00:01:58.486 CC lib/ut_mock/mock.o 00:01:58.486 CC lib/ut/ut.o 00:01:58.486 LIB libspdk_log.a 00:01:58.486 LIB libspdk_ut_mock.a 00:01:58.486 LIB libspdk_ut.a 00:01:58.486 SO libspdk_log.so.7.0 00:01:58.486 SO libspdk_ut_mock.so.6.0 00:01:58.486 SO libspdk_ut.so.2.0 00:01:58.486 SYMLINK libspdk_log.so 00:01:58.486 SYMLINK libspdk_ut_mock.so 00:01:58.486 SYMLINK libspdk_ut.so 00:01:58.744 CXX 
lib/trace_parser/trace.o 00:01:58.745 CC lib/ioat/ioat.o 00:01:58.745 CC lib/dma/dma.o 00:01:58.745 CC lib/util/base64.o 00:01:59.002 CC lib/util/bit_array.o 00:01:59.002 CC lib/util/cpuset.o 00:01:59.002 CC lib/util/crc32.o 00:01:59.002 CC lib/util/crc16.o 00:01:59.002 CC lib/util/crc32c.o 00:01:59.002 CC lib/util/crc32_ieee.o 00:01:59.002 CC lib/util/crc64.o 00:01:59.002 CC lib/util/dif.o 00:01:59.002 CC lib/util/fd.o 00:01:59.002 CC lib/util/file.o 00:01:59.002 CC lib/util/hexlify.o 00:01:59.002 CC lib/util/iov.o 00:01:59.002 CC lib/util/math.o 00:01:59.002 CC lib/util/pipe.o 00:01:59.002 CC lib/util/strerror_tls.o 00:01:59.002 CC lib/util/string.o 00:01:59.002 CC lib/util/uuid.o 00:01:59.002 CC lib/util/fd_group.o 00:01:59.002 CC lib/util/xor.o 00:01:59.002 CC lib/util/zipf.o 00:01:59.002 CC lib/vfio_user/host/vfio_user_pci.o 00:01:59.002 CC lib/vfio_user/host/vfio_user.o 00:01:59.002 LIB libspdk_dma.a 00:01:59.002 SO libspdk_dma.so.4.0 00:01:59.002 LIB libspdk_ioat.a 00:01:59.261 SO libspdk_ioat.so.7.0 00:01:59.261 SYMLINK libspdk_dma.so 00:01:59.261 SYMLINK libspdk_ioat.so 00:01:59.261 LIB libspdk_vfio_user.a 00:01:59.261 SO libspdk_vfio_user.so.5.0 00:01:59.261 LIB libspdk_util.a 00:01:59.261 SYMLINK libspdk_vfio_user.so 00:01:59.261 SO libspdk_util.so.9.0 00:01:59.519 LIB libspdk_trace_parser.a 00:01:59.519 SYMLINK libspdk_util.so 00:01:59.519 SO libspdk_trace_parser.so.5.0 00:01:59.519 SYMLINK libspdk_trace_parser.so 00:01:59.778 CC lib/json/json_parse.o 00:01:59.778 CC lib/json/json_util.o 00:01:59.778 CC lib/json/json_write.o 00:01:59.778 CC lib/rdma/common.o 00:01:59.778 CC lib/rdma/rdma_verbs.o 00:01:59.778 CC lib/env_dpdk/env.o 00:01:59.778 CC lib/env_dpdk/pci.o 00:01:59.778 CC lib/env_dpdk/memory.o 00:01:59.778 CC lib/env_dpdk/init.o 00:01:59.778 CC lib/env_dpdk/threads.o 00:01:59.778 CC lib/env_dpdk/pci_ioat.o 00:01:59.778 CC lib/env_dpdk/pci_virtio.o 00:01:59.778 CC lib/env_dpdk/pci_vmd.o 00:01:59.778 CC lib/env_dpdk/pci_idxd.o 00:01:59.778 CC lib/env_dpdk/pci_event.o 00:01:59.778 CC lib/env_dpdk/sigbus_handler.o 00:01:59.778 CC lib/env_dpdk/pci_dpdk.o 00:01:59.778 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:59.778 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:59.778 CC lib/idxd/idxd.o 00:01:59.778 CC lib/idxd/idxd_user.o 00:01:59.778 CC lib/vmd/led.o 00:01:59.778 CC lib/idxd/idxd_kernel.o 00:01:59.778 CC lib/vmd/vmd.o 00:01:59.778 CC lib/conf/conf.o 00:02:00.036 LIB libspdk_conf.a 00:02:00.036 LIB libspdk_rdma.a 00:02:00.036 LIB libspdk_json.a 00:02:00.036 SO libspdk_conf.so.6.0 00:02:00.036 SO libspdk_rdma.so.6.0 00:02:00.036 SO libspdk_json.so.6.0 00:02:00.036 SYMLINK libspdk_conf.so 00:02:00.036 SYMLINK libspdk_rdma.so 00:02:00.036 SYMLINK libspdk_json.so 00:02:00.296 LIB libspdk_idxd.a 00:02:00.296 SO libspdk_idxd.so.12.0 00:02:00.296 LIB libspdk_vmd.a 00:02:00.296 SO libspdk_vmd.so.6.0 00:02:00.296 SYMLINK libspdk_idxd.so 00:02:00.296 SYMLINK libspdk_vmd.so 00:02:00.296 CC lib/jsonrpc/jsonrpc_server.o 00:02:00.296 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:00.296 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:00.296 CC lib/jsonrpc/jsonrpc_client.o 00:02:00.555 LIB libspdk_jsonrpc.a 00:02:00.555 SO libspdk_jsonrpc.so.6.0 00:02:00.813 SYMLINK libspdk_jsonrpc.so 00:02:00.813 LIB libspdk_env_dpdk.a 00:02:00.813 SO libspdk_env_dpdk.so.14.1 00:02:00.813 SYMLINK libspdk_env_dpdk.so 00:02:01.072 CC lib/rpc/rpc.o 00:02:01.072 LIB libspdk_rpc.a 00:02:01.072 SO libspdk_rpc.so.6.0 00:02:01.331 SYMLINK libspdk_rpc.so 00:02:01.645 CC lib/trace/trace.o 00:02:01.645 CC lib/trace/trace_flags.o 
00:02:01.645 CC lib/trace/trace_rpc.o 00:02:01.645 CC lib/notify/notify.o 00:02:01.645 CC lib/notify/notify_rpc.o 00:02:01.645 CC lib/keyring/keyring_rpc.o 00:02:01.645 CC lib/keyring/keyring.o 00:02:01.645 LIB libspdk_notify.a 00:02:01.645 SO libspdk_notify.so.6.0 00:02:01.645 LIB libspdk_keyring.a 00:02:01.645 LIB libspdk_trace.a 00:02:01.904 SO libspdk_keyring.so.1.0 00:02:01.904 SO libspdk_trace.so.10.0 00:02:01.904 SYMLINK libspdk_notify.so 00:02:01.904 SYMLINK libspdk_keyring.so 00:02:01.904 SYMLINK libspdk_trace.so 00:02:02.163 CC lib/sock/sock.o 00:02:02.163 CC lib/sock/sock_rpc.o 00:02:02.163 CC lib/thread/thread.o 00:02:02.163 CC lib/thread/iobuf.o 00:02:02.423 LIB libspdk_sock.a 00:02:02.423 SO libspdk_sock.so.9.0 00:02:02.423 SYMLINK libspdk_sock.so 00:02:02.991 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:02.991 CC lib/nvme/nvme_ctrlr.o 00:02:02.991 CC lib/nvme/nvme_fabric.o 00:02:02.991 CC lib/nvme/nvme_ns_cmd.o 00:02:02.991 CC lib/nvme/nvme_ns.o 00:02:02.991 CC lib/nvme/nvme_pcie_common.o 00:02:02.991 CC lib/nvme/nvme_pcie.o 00:02:02.991 CC lib/nvme/nvme_qpair.o 00:02:02.991 CC lib/nvme/nvme.o 00:02:02.991 CC lib/nvme/nvme_quirks.o 00:02:02.991 CC lib/nvme/nvme_transport.o 00:02:02.991 CC lib/nvme/nvme_discovery.o 00:02:02.991 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:02.991 CC lib/nvme/nvme_tcp.o 00:02:02.991 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:02.991 CC lib/nvme/nvme_opal.o 00:02:02.991 CC lib/nvme/nvme_poll_group.o 00:02:02.991 CC lib/nvme/nvme_io_msg.o 00:02:02.991 CC lib/nvme/nvme_stubs.o 00:02:02.991 CC lib/nvme/nvme_zns.o 00:02:02.991 CC lib/nvme/nvme_auth.o 00:02:02.991 CC lib/nvme/nvme_cuse.o 00:02:02.991 CC lib/nvme/nvme_rdma.o 00:02:03.250 LIB libspdk_thread.a 00:02:03.250 SO libspdk_thread.so.10.0 00:02:03.250 SYMLINK libspdk_thread.so 00:02:03.509 CC lib/accel/accel.o 00:02:03.509 CC lib/accel/accel_rpc.o 00:02:03.509 CC lib/accel/accel_sw.o 00:02:03.509 CC lib/blob/blobstore.o 00:02:03.509 CC lib/blob/request.o 00:02:03.509 CC lib/blob/zeroes.o 00:02:03.509 CC lib/blob/blob_bs_dev.o 00:02:03.509 CC lib/init/subsystem.o 00:02:03.509 CC lib/init/subsystem_rpc.o 00:02:03.509 CC lib/init/json_config.o 00:02:03.509 CC lib/init/rpc.o 00:02:03.509 CC lib/virtio/virtio_vhost_user.o 00:02:03.509 CC lib/virtio/virtio.o 00:02:03.509 CC lib/virtio/virtio_pci.o 00:02:03.509 CC lib/virtio/virtio_vfio_user.o 00:02:03.767 LIB libspdk_init.a 00:02:03.767 SO libspdk_init.so.5.0 00:02:03.767 LIB libspdk_virtio.a 00:02:04.027 SO libspdk_virtio.so.7.0 00:02:04.027 SYMLINK libspdk_init.so 00:02:04.027 SYMLINK libspdk_virtio.so 00:02:04.285 CC lib/event/app.o 00:02:04.285 CC lib/event/reactor.o 00:02:04.285 CC lib/event/log_rpc.o 00:02:04.285 CC lib/event/app_rpc.o 00:02:04.285 CC lib/event/scheduler_static.o 00:02:04.285 LIB libspdk_accel.a 00:02:04.285 SO libspdk_accel.so.15.0 00:02:04.285 LIB libspdk_nvme.a 00:02:04.544 SYMLINK libspdk_accel.so 00:02:04.544 SO libspdk_nvme.so.13.0 00:02:04.544 LIB libspdk_event.a 00:02:04.544 SO libspdk_event.so.13.1 00:02:04.544 SYMLINK libspdk_event.so 00:02:04.802 CC lib/bdev/bdev_rpc.o 00:02:04.802 CC lib/bdev/bdev.o 00:02:04.802 CC lib/bdev/part.o 00:02:04.802 CC lib/bdev/bdev_zone.o 00:02:04.802 CC lib/bdev/scsi_nvme.o 00:02:04.802 SYMLINK libspdk_nvme.so 00:02:05.739 LIB libspdk_blob.a 00:02:05.739 SO libspdk_blob.so.11.0 00:02:05.739 SYMLINK libspdk_blob.so 00:02:05.998 CC lib/lvol/lvol.o 00:02:05.998 CC lib/blobfs/blobfs.o 00:02:05.998 CC lib/blobfs/tree.o 00:02:06.566 LIB libspdk_bdev.a 00:02:06.566 SO libspdk_bdev.so.15.0 00:02:06.566 LIB 
libspdk_blobfs.a 00:02:06.566 SYMLINK libspdk_bdev.so 00:02:06.566 SO libspdk_blobfs.so.10.0 00:02:06.566 LIB libspdk_lvol.a 00:02:06.566 SO libspdk_lvol.so.10.0 00:02:06.566 SYMLINK libspdk_blobfs.so 00:02:06.825 SYMLINK libspdk_lvol.so 00:02:06.825 CC lib/nbd/nbd.o 00:02:06.825 CC lib/nbd/nbd_rpc.o 00:02:06.825 CC lib/nvmf/ctrlr_discovery.o 00:02:06.825 CC lib/nvmf/ctrlr.o 00:02:06.825 CC lib/nvmf/nvmf.o 00:02:06.825 CC lib/nvmf/ctrlr_bdev.o 00:02:06.825 CC lib/nvmf/subsystem.o 00:02:06.825 CC lib/nvmf/nvmf_rpc.o 00:02:06.825 CC lib/nvmf/transport.o 00:02:06.825 CC lib/nvmf/tcp.o 00:02:06.825 CC lib/nvmf/rdma.o 00:02:06.825 CC lib/nvmf/stubs.o 00:02:06.825 CC lib/nvmf/mdns_server.o 00:02:06.825 CC lib/nvmf/auth.o 00:02:06.825 CC lib/scsi/dev.o 00:02:06.825 CC lib/scsi/lun.o 00:02:06.825 CC lib/scsi/port.o 00:02:06.825 CC lib/ftl/ftl_core.o 00:02:06.825 CC lib/scsi/scsi.o 00:02:06.825 CC lib/scsi/scsi_bdev.o 00:02:06.825 CC lib/ftl/ftl_init.o 00:02:06.825 CC lib/scsi/scsi_pr.o 00:02:06.825 CC lib/ublk/ublk.o 00:02:06.825 CC lib/scsi/scsi_rpc.o 00:02:06.825 CC lib/ftl/ftl_layout.o 00:02:06.825 CC lib/ftl/ftl_debug.o 00:02:06.825 CC lib/scsi/task.o 00:02:06.825 CC lib/ftl/ftl_io.o 00:02:06.825 CC lib/ublk/ublk_rpc.o 00:02:06.825 CC lib/ftl/ftl_sb.o 00:02:06.825 CC lib/ftl/ftl_l2p.o 00:02:06.825 CC lib/ftl/ftl_l2p_flat.o 00:02:06.825 CC lib/ftl/ftl_nv_cache.o 00:02:06.825 CC lib/ftl/ftl_band.o 00:02:06.825 CC lib/ftl/ftl_band_ops.o 00:02:06.825 CC lib/ftl/ftl_writer.o 00:02:06.825 CC lib/ftl/ftl_rq.o 00:02:06.825 CC lib/ftl/ftl_reloc.o 00:02:06.825 CC lib/ftl/ftl_l2p_cache.o 00:02:06.825 CC lib/ftl/ftl_p2l.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:06.825 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:06.825 CC lib/ftl/utils/ftl_md.o 00:02:06.825 CC lib/ftl/utils/ftl_conf.o 00:02:06.825 CC lib/ftl/utils/ftl_mempool.o 00:02:06.825 CC lib/ftl/utils/ftl_property.o 00:02:06.825 CC lib/ftl/utils/ftl_bitmap.o 00:02:06.825 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:06.825 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:06.825 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:06.825 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:06.825 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:06.825 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:06.825 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:06.825 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:06.825 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:06.825 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:06.825 CC lib/ftl/base/ftl_base_bdev.o 00:02:06.825 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:06.825 CC lib/ftl/base/ftl_base_dev.o 00:02:06.825 CC lib/ftl/ftl_trace.o 00:02:07.391 LIB libspdk_nbd.a 00:02:07.391 SO libspdk_nbd.so.7.0 00:02:07.391 LIB libspdk_ublk.a 00:02:07.649 LIB libspdk_scsi.a 00:02:07.649 SYMLINK libspdk_nbd.so 00:02:07.649 SO libspdk_ublk.so.3.0 00:02:07.649 SO libspdk_scsi.so.9.0 00:02:07.649 SYMLINK libspdk_ublk.so 00:02:07.649 SYMLINK libspdk_scsi.so 00:02:07.908 LIB libspdk_ftl.a 00:02:07.908 CC lib/iscsi/conn.o 00:02:07.908 CC lib/vhost/vhost.o 00:02:07.908 CC lib/iscsi/init_grp.o 
00:02:07.908 CC lib/iscsi/iscsi.o 00:02:07.908 CC lib/iscsi/md5.o 00:02:07.908 CC lib/vhost/vhost_rpc.o 00:02:07.908 CC lib/iscsi/param.o 00:02:07.908 CC lib/iscsi/tgt_node.o 00:02:07.908 CC lib/vhost/vhost_scsi.o 00:02:07.908 CC lib/iscsi/iscsi_subsystem.o 00:02:07.908 CC lib/iscsi/portal_grp.o 00:02:07.908 CC lib/vhost/vhost_blk.o 00:02:07.908 CC lib/vhost/rte_vhost_user.o 00:02:07.908 CC lib/iscsi/iscsi_rpc.o 00:02:07.908 CC lib/iscsi/task.o 00:02:07.908 SO libspdk_ftl.so.9.0 00:02:08.475 SYMLINK libspdk_ftl.so 00:02:08.475 LIB libspdk_nvmf.a 00:02:08.475 SO libspdk_nvmf.so.18.1 00:02:08.733 SYMLINK libspdk_nvmf.so 00:02:08.734 LIB libspdk_vhost.a 00:02:08.734 SO libspdk_vhost.so.8.0 00:02:08.734 SYMLINK libspdk_vhost.so 00:02:08.992 LIB libspdk_iscsi.a 00:02:08.992 SO libspdk_iscsi.so.8.0 00:02:08.992 SYMLINK libspdk_iscsi.so 00:02:09.559 CC module/env_dpdk/env_dpdk_rpc.o 00:02:09.559 CC module/keyring/file/keyring.o 00:02:09.559 CC module/keyring/file/keyring_rpc.o 00:02:09.559 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:09.559 CC module/keyring/linux/keyring.o 00:02:09.559 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:09.559 CC module/keyring/linux/keyring_rpc.o 00:02:09.559 CC module/accel/ioat/accel_ioat.o 00:02:09.559 CC module/accel/ioat/accel_ioat_rpc.o 00:02:09.559 CC module/accel/dsa/accel_dsa_rpc.o 00:02:09.559 CC module/sock/posix/posix.o 00:02:09.559 CC module/accel/dsa/accel_dsa.o 00:02:09.559 CC module/scheduler/gscheduler/gscheduler.o 00:02:09.559 CC module/accel/iaa/accel_iaa.o 00:02:09.559 CC module/accel/iaa/accel_iaa_rpc.o 00:02:09.559 CC module/accel/error/accel_error.o 00:02:09.559 CC module/accel/error/accel_error_rpc.o 00:02:09.559 LIB libspdk_env_dpdk_rpc.a 00:02:09.817 CC module/blob/bdev/blob_bdev.o 00:02:09.817 SO libspdk_env_dpdk_rpc.so.6.0 00:02:09.817 LIB libspdk_keyring_file.a 00:02:09.817 SYMLINK libspdk_env_dpdk_rpc.so 00:02:09.817 SO libspdk_keyring_file.so.1.0 00:02:09.817 LIB libspdk_scheduler_gscheduler.a 00:02:09.817 LIB libspdk_keyring_linux.a 00:02:09.817 LIB libspdk_scheduler_dpdk_governor.a 00:02:09.817 LIB libspdk_scheduler_dynamic.a 00:02:09.817 SO libspdk_scheduler_gscheduler.so.4.0 00:02:09.817 SYMLINK libspdk_keyring_file.so 00:02:09.817 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:09.817 SO libspdk_keyring_linux.so.1.0 00:02:09.817 LIB libspdk_accel_error.a 00:02:09.817 LIB libspdk_accel_ioat.a 00:02:09.817 SO libspdk_scheduler_dynamic.so.4.0 00:02:09.817 LIB libspdk_accel_iaa.a 00:02:09.817 SO libspdk_accel_error.so.2.0 00:02:09.817 SO libspdk_accel_ioat.so.6.0 00:02:09.817 SYMLINK libspdk_scheduler_gscheduler.so 00:02:09.817 SYMLINK libspdk_keyring_linux.so 00:02:09.817 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:09.817 SO libspdk_accel_iaa.so.3.0 00:02:09.817 LIB libspdk_accel_dsa.a 00:02:09.817 SYMLINK libspdk_scheduler_dynamic.so 00:02:09.817 LIB libspdk_blob_bdev.a 00:02:10.075 SYMLINK libspdk_accel_error.so 00:02:10.075 SYMLINK libspdk_accel_ioat.so 00:02:10.075 SO libspdk_accel_dsa.so.5.0 00:02:10.075 SO libspdk_blob_bdev.so.11.0 00:02:10.075 SYMLINK libspdk_accel_iaa.so 00:02:10.075 SYMLINK libspdk_accel_dsa.so 00:02:10.075 SYMLINK libspdk_blob_bdev.so 00:02:10.333 LIB libspdk_sock_posix.a 00:02:10.333 SO libspdk_sock_posix.so.6.0 00:02:10.333 SYMLINK libspdk_sock_posix.so 00:02:10.333 CC module/blobfs/bdev/blobfs_bdev.o 00:02:10.333 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:10.333 CC module/bdev/delay/vbdev_delay.o 00:02:10.333 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:10.333 CC 
module/bdev/lvol/vbdev_lvol.o 00:02:10.333 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:10.333 CC module/bdev/error/vbdev_error_rpc.o 00:02:10.333 CC module/bdev/error/vbdev_error.o 00:02:10.333 CC module/bdev/gpt/vbdev_gpt.o 00:02:10.333 CC module/bdev/gpt/gpt.o 00:02:10.333 CC module/bdev/null/bdev_null.o 00:02:10.333 CC module/bdev/null/bdev_null_rpc.o 00:02:10.333 CC module/bdev/aio/bdev_aio_rpc.o 00:02:10.333 CC module/bdev/aio/bdev_aio.o 00:02:10.591 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:10.591 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:10.591 CC module/bdev/split/vbdev_split.o 00:02:10.591 CC module/bdev/malloc/bdev_malloc.o 00:02:10.591 CC module/bdev/nvme/bdev_nvme.o 00:02:10.591 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:10.591 CC module/bdev/split/vbdev_split_rpc.o 00:02:10.591 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:10.591 CC module/bdev/nvme/nvme_rpc.o 00:02:10.591 CC module/bdev/nvme/bdev_mdns_client.o 00:02:10.591 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:10.591 CC module/bdev/raid/bdev_raid.o 00:02:10.591 CC module/bdev/passthru/vbdev_passthru.o 00:02:10.591 CC module/bdev/iscsi/bdev_iscsi.o 00:02:10.591 CC module/bdev/raid/bdev_raid_rpc.o 00:02:10.591 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:10.591 CC module/bdev/nvme/vbdev_opal.o 00:02:10.591 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:10.591 CC module/bdev/raid/bdev_raid_sb.o 00:02:10.591 CC module/bdev/raid/raid0.o 00:02:10.591 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:10.591 CC module/bdev/raid/concat.o 00:02:10.591 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:10.591 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:10.591 CC module/bdev/raid/raid1.o 00:02:10.591 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:10.591 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:10.591 CC module/bdev/ftl/bdev_ftl.o 00:02:10.591 LIB libspdk_blobfs_bdev.a 00:02:10.849 SO libspdk_blobfs_bdev.so.6.0 00:02:10.849 LIB libspdk_bdev_split.a 00:02:10.849 LIB libspdk_bdev_gpt.a 00:02:10.849 LIB libspdk_bdev_null.a 00:02:10.849 LIB libspdk_bdev_error.a 00:02:10.849 SO libspdk_bdev_gpt.so.6.0 00:02:10.849 SO libspdk_bdev_split.so.6.0 00:02:10.849 SO libspdk_bdev_null.so.6.0 00:02:10.849 LIB libspdk_bdev_ftl.a 00:02:10.849 LIB libspdk_bdev_aio.a 00:02:10.849 SYMLINK libspdk_blobfs_bdev.so 00:02:10.849 LIB libspdk_bdev_passthru.a 00:02:10.849 SO libspdk_bdev_error.so.6.0 00:02:10.849 SO libspdk_bdev_aio.so.6.0 00:02:10.849 SO libspdk_bdev_ftl.so.6.0 00:02:10.849 SYMLINK libspdk_bdev_gpt.so 00:02:10.849 SO libspdk_bdev_passthru.so.6.0 00:02:10.849 SYMLINK libspdk_bdev_null.so 00:02:10.849 SYMLINK libspdk_bdev_split.so 00:02:10.849 LIB libspdk_bdev_malloc.a 00:02:10.849 SYMLINK libspdk_bdev_error.so 00:02:10.849 LIB libspdk_bdev_iscsi.a 00:02:10.849 LIB libspdk_bdev_delay.a 00:02:10.849 LIB libspdk_bdev_zone_block.a 00:02:10.849 SYMLINK libspdk_bdev_ftl.so 00:02:10.849 SO libspdk_bdev_delay.so.6.0 00:02:10.849 SYMLINK libspdk_bdev_aio.so 00:02:10.849 SO libspdk_bdev_iscsi.so.6.0 00:02:10.849 SO libspdk_bdev_malloc.so.6.0 00:02:10.849 SO libspdk_bdev_zone_block.so.6.0 00:02:10.849 SYMLINK libspdk_bdev_passthru.so 00:02:10.849 LIB libspdk_bdev_lvol.a 00:02:10.849 SYMLINK libspdk_bdev_delay.so 00:02:10.849 SYMLINK libspdk_bdev_zone_block.so 00:02:10.849 SYMLINK libspdk_bdev_iscsi.so 00:02:10.849 SYMLINK libspdk_bdev_malloc.so 00:02:10.849 SO libspdk_bdev_lvol.so.6.0 00:02:10.849 LIB libspdk_bdev_virtio.a 00:02:11.107 SO libspdk_bdev_virtio.so.6.0 00:02:11.107 SYMLINK libspdk_bdev_lvol.so 00:02:11.107 SYMLINK 
libspdk_bdev_virtio.so 00:02:11.366 LIB libspdk_bdev_raid.a 00:02:11.366 SO libspdk_bdev_raid.so.6.0 00:02:11.366 SYMLINK libspdk_bdev_raid.so 00:02:11.933 LIB libspdk_bdev_nvme.a 00:02:12.192 SO libspdk_bdev_nvme.so.7.0 00:02:12.192 SYMLINK libspdk_bdev_nvme.so 00:02:12.759 CC module/event/subsystems/scheduler/scheduler.o 00:02:12.759 CC module/event/subsystems/iobuf/iobuf.o 00:02:12.759 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:12.759 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:12.759 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:12.759 CC module/event/subsystems/vmd/vmd.o 00:02:12.759 CC module/event/subsystems/sock/sock.o 00:02:12.759 CC module/event/subsystems/keyring/keyring.o 00:02:12.759 LIB libspdk_event_scheduler.a 00:02:13.018 LIB libspdk_event_iobuf.a 00:02:13.018 LIB libspdk_event_vmd.a 00:02:13.018 LIB libspdk_event_vhost_blk.a 00:02:13.018 SO libspdk_event_scheduler.so.4.0 00:02:13.018 LIB libspdk_event_keyring.a 00:02:13.018 LIB libspdk_event_sock.a 00:02:13.018 SO libspdk_event_vmd.so.6.0 00:02:13.018 SO libspdk_event_keyring.so.1.0 00:02:13.018 SO libspdk_event_vhost_blk.so.3.0 00:02:13.018 SO libspdk_event_iobuf.so.3.0 00:02:13.018 SYMLINK libspdk_event_scheduler.so 00:02:13.018 SO libspdk_event_sock.so.5.0 00:02:13.018 SYMLINK libspdk_event_keyring.so 00:02:13.018 SYMLINK libspdk_event_vhost_blk.so 00:02:13.018 SYMLINK libspdk_event_vmd.so 00:02:13.018 SYMLINK libspdk_event_iobuf.so 00:02:13.018 SYMLINK libspdk_event_sock.so 00:02:13.280 CC module/event/subsystems/accel/accel.o 00:02:13.542 LIB libspdk_event_accel.a 00:02:13.542 SO libspdk_event_accel.so.6.0 00:02:13.542 SYMLINK libspdk_event_accel.so 00:02:13.800 CC module/event/subsystems/bdev/bdev.o 00:02:14.058 LIB libspdk_event_bdev.a 00:02:14.058 SO libspdk_event_bdev.so.6.0 00:02:14.058 SYMLINK libspdk_event_bdev.so 00:02:14.317 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:14.317 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:14.317 CC module/event/subsystems/scsi/scsi.o 00:02:14.317 CC module/event/subsystems/nbd/nbd.o 00:02:14.317 CC module/event/subsystems/ublk/ublk.o 00:02:14.576 LIB libspdk_event_scsi.a 00:02:14.577 LIB libspdk_event_nbd.a 00:02:14.577 LIB libspdk_event_ublk.a 00:02:14.577 SO libspdk_event_scsi.so.6.0 00:02:14.577 SO libspdk_event_nbd.so.6.0 00:02:14.577 LIB libspdk_event_nvmf.a 00:02:14.577 SO libspdk_event_ublk.so.3.0 00:02:14.577 SO libspdk_event_nvmf.so.6.0 00:02:14.577 SYMLINK libspdk_event_scsi.so 00:02:14.577 SYMLINK libspdk_event_nbd.so 00:02:14.577 SYMLINK libspdk_event_ublk.so 00:02:14.577 SYMLINK libspdk_event_nvmf.so 00:02:14.836 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:14.836 CC module/event/subsystems/iscsi/iscsi.o 00:02:15.094 LIB libspdk_event_vhost_scsi.a 00:02:15.095 LIB libspdk_event_iscsi.a 00:02:15.095 SO libspdk_event_vhost_scsi.so.3.0 00:02:15.095 SO libspdk_event_iscsi.so.6.0 00:02:15.095 SYMLINK libspdk_event_vhost_scsi.so 00:02:15.095 SYMLINK libspdk_event_iscsi.so 00:02:15.353 SO libspdk.so.6.0 00:02:15.353 SYMLINK libspdk.so 00:02:15.621 CC test/rpc_client/rpc_client_test.o 00:02:15.621 TEST_HEADER include/spdk/accel.h 00:02:15.621 TEST_HEADER include/spdk/assert.h 00:02:15.621 TEST_HEADER include/spdk/accel_module.h 00:02:15.621 CXX app/trace/trace.o 00:02:15.621 TEST_HEADER include/spdk/barrier.h 00:02:15.621 TEST_HEADER include/spdk/base64.h 00:02:15.621 TEST_HEADER include/spdk/bdev.h 00:02:15.621 CC app/spdk_top/spdk_top.o 00:02:15.621 TEST_HEADER include/spdk/bdev_module.h 00:02:15.621 CC 
app/trace_record/trace_record.o 00:02:15.621 TEST_HEADER include/spdk/bdev_zone.h 00:02:15.621 TEST_HEADER include/spdk/bit_array.h 00:02:15.621 TEST_HEADER include/spdk/bit_pool.h 00:02:15.621 CC app/spdk_nvme_discover/discovery_aer.o 00:02:15.621 CC app/spdk_nvme_identify/identify.o 00:02:15.621 TEST_HEADER include/spdk/blob_bdev.h 00:02:15.621 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:15.621 TEST_HEADER include/spdk/blobfs.h 00:02:15.621 TEST_HEADER include/spdk/blob.h 00:02:15.621 TEST_HEADER include/spdk/conf.h 00:02:15.621 TEST_HEADER include/spdk/config.h 00:02:15.621 TEST_HEADER include/spdk/crc16.h 00:02:15.621 TEST_HEADER include/spdk/cpuset.h 00:02:15.621 TEST_HEADER include/spdk/crc32.h 00:02:15.621 TEST_HEADER include/spdk/crc64.h 00:02:15.621 TEST_HEADER include/spdk/dif.h 00:02:15.621 CC app/spdk_nvme_perf/perf.o 00:02:15.621 TEST_HEADER include/spdk/dma.h 00:02:15.621 TEST_HEADER include/spdk/endian.h 00:02:15.621 CC app/spdk_lspci/spdk_lspci.o 00:02:15.621 TEST_HEADER include/spdk/env_dpdk.h 00:02:15.621 TEST_HEADER include/spdk/env.h 00:02:15.621 TEST_HEADER include/spdk/event.h 00:02:15.621 TEST_HEADER include/spdk/fd_group.h 00:02:15.621 TEST_HEADER include/spdk/fd.h 00:02:15.621 TEST_HEADER include/spdk/ftl.h 00:02:15.621 TEST_HEADER include/spdk/file.h 00:02:15.621 TEST_HEADER include/spdk/gpt_spec.h 00:02:15.621 TEST_HEADER include/spdk/hexlify.h 00:02:15.621 TEST_HEADER include/spdk/histogram_data.h 00:02:15.621 TEST_HEADER include/spdk/idxd.h 00:02:15.621 TEST_HEADER include/spdk/idxd_spec.h 00:02:15.621 TEST_HEADER include/spdk/init.h 00:02:15.621 TEST_HEADER include/spdk/ioat.h 00:02:15.621 TEST_HEADER include/spdk/ioat_spec.h 00:02:15.621 TEST_HEADER include/spdk/iscsi_spec.h 00:02:15.621 TEST_HEADER include/spdk/json.h 00:02:15.621 TEST_HEADER include/spdk/jsonrpc.h 00:02:15.621 TEST_HEADER include/spdk/keyring.h 00:02:15.621 TEST_HEADER include/spdk/keyring_module.h 00:02:15.621 TEST_HEADER include/spdk/likely.h 00:02:15.621 TEST_HEADER include/spdk/log.h 00:02:15.621 TEST_HEADER include/spdk/memory.h 00:02:15.621 TEST_HEADER include/spdk/lvol.h 00:02:15.621 CC app/spdk_dd/spdk_dd.o 00:02:15.621 TEST_HEADER include/spdk/mmio.h 00:02:15.622 TEST_HEADER include/spdk/nbd.h 00:02:15.622 TEST_HEADER include/spdk/nvme.h 00:02:15.622 TEST_HEADER include/spdk/notify.h 00:02:15.622 TEST_HEADER include/spdk/nvme_intel.h 00:02:15.622 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:15.622 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:15.622 TEST_HEADER include/spdk/nvme_spec.h 00:02:15.622 TEST_HEADER include/spdk/nvme_zns.h 00:02:15.622 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:15.622 TEST_HEADER include/spdk/nvmf.h 00:02:15.622 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:15.622 CC app/nvmf_tgt/nvmf_main.o 00:02:15.622 TEST_HEADER include/spdk/nvmf_spec.h 00:02:15.622 TEST_HEADER include/spdk/opal.h 00:02:15.622 TEST_HEADER include/spdk/nvmf_transport.h 00:02:15.622 TEST_HEADER include/spdk/pci_ids.h 00:02:15.622 TEST_HEADER include/spdk/opal_spec.h 00:02:15.622 TEST_HEADER include/spdk/pipe.h 00:02:15.622 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:15.622 TEST_HEADER include/spdk/queue.h 00:02:15.622 TEST_HEADER include/spdk/reduce.h 00:02:15.622 TEST_HEADER include/spdk/scheduler.h 00:02:15.622 TEST_HEADER include/spdk/rpc.h 00:02:15.622 TEST_HEADER include/spdk/scsi.h 00:02:15.622 TEST_HEADER include/spdk/scsi_spec.h 00:02:15.622 TEST_HEADER include/spdk/sock.h 00:02:15.622 TEST_HEADER include/spdk/stdinc.h 00:02:15.622 TEST_HEADER include/spdk/thread.h 
00:02:15.622 CC app/vhost/vhost.o 00:02:15.622 TEST_HEADER include/spdk/trace.h 00:02:15.622 TEST_HEADER include/spdk/string.h 00:02:15.622 TEST_HEADER include/spdk/tree.h 00:02:15.622 CC app/spdk_tgt/spdk_tgt.o 00:02:15.622 TEST_HEADER include/spdk/trace_parser.h 00:02:15.622 CC app/iscsi_tgt/iscsi_tgt.o 00:02:15.622 TEST_HEADER include/spdk/ublk.h 00:02:15.622 TEST_HEADER include/spdk/util.h 00:02:15.622 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:15.622 TEST_HEADER include/spdk/uuid.h 00:02:15.622 TEST_HEADER include/spdk/version.h 00:02:15.622 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:15.622 TEST_HEADER include/spdk/vhost.h 00:02:15.622 TEST_HEADER include/spdk/vmd.h 00:02:15.622 TEST_HEADER include/spdk/xor.h 00:02:15.622 TEST_HEADER include/spdk/zipf.h 00:02:15.622 CXX test/cpp_headers/accel.o 00:02:15.622 CXX test/cpp_headers/accel_module.o 00:02:15.622 CXX test/cpp_headers/assert.o 00:02:15.622 CXX test/cpp_headers/barrier.o 00:02:15.622 CXX test/cpp_headers/base64.o 00:02:15.622 CXX test/cpp_headers/bdev_module.o 00:02:15.622 CXX test/cpp_headers/bdev.o 00:02:15.622 CXX test/cpp_headers/bit_pool.o 00:02:15.622 CXX test/cpp_headers/bdev_zone.o 00:02:15.622 CXX test/cpp_headers/bit_array.o 00:02:15.622 CXX test/cpp_headers/blob_bdev.o 00:02:15.622 CXX test/cpp_headers/blobfs_bdev.o 00:02:15.622 CXX test/cpp_headers/blobfs.o 00:02:15.622 CXX test/cpp_headers/blob.o 00:02:15.622 CXX test/cpp_headers/conf.o 00:02:15.622 CXX test/cpp_headers/cpuset.o 00:02:15.622 CXX test/cpp_headers/crc16.o 00:02:15.622 CXX test/cpp_headers/config.o 00:02:15.622 CXX test/cpp_headers/crc64.o 00:02:15.622 CXX test/cpp_headers/crc32.o 00:02:15.622 CXX test/cpp_headers/dif.o 00:02:15.886 CC test/nvme/aer/aer.o 00:02:15.886 CXX test/cpp_headers/dma.o 00:02:15.886 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:15.886 CC test/nvme/e2edp/nvme_dp.o 00:02:15.886 CC test/app/jsoncat/jsoncat.o 00:02:15.886 CC test/nvme/startup/startup.o 00:02:15.886 CC test/event/event_perf/event_perf.o 00:02:15.886 CC examples/nvme/hotplug/hotplug.o 00:02:15.886 CC examples/nvme/reconnect/reconnect.o 00:02:15.886 CC test/nvme/sgl/sgl.o 00:02:15.886 CC test/event/reactor_perf/reactor_perf.o 00:02:15.886 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:15.886 CC test/event/reactor/reactor.o 00:02:15.886 CC test/env/pci/pci_ut.o 00:02:15.886 CC test/app/histogram_perf/histogram_perf.o 00:02:15.886 CC test/nvme/overhead/overhead.o 00:02:15.886 CC examples/vmd/led/led.o 00:02:15.886 CC app/fio/nvme/fio_plugin.o 00:02:15.886 CC examples/nvme/arbitration/arbitration.o 00:02:15.886 CC test/nvme/boot_partition/boot_partition.o 00:02:15.886 CC test/app/stub/stub.o 00:02:15.886 CC test/nvme/compliance/nvme_compliance.o 00:02:15.886 CC examples/vmd/lsvmd/lsvmd.o 00:02:15.886 CC test/thread/poller_perf/poller_perf.o 00:02:15.886 CC test/env/vtophys/vtophys.o 00:02:15.886 CC examples/nvme/abort/abort.o 00:02:15.886 CC examples/nvme/hello_world/hello_world.o 00:02:15.886 CC test/bdev/bdevio/bdevio.o 00:02:15.886 CC examples/ioat/verify/verify.o 00:02:15.886 CC test/nvme/reset/reset.o 00:02:15.886 CC test/nvme/connect_stress/connect_stress.o 00:02:15.886 CC test/env/memory/memory_ut.o 00:02:15.886 CC examples/sock/hello_world/hello_sock.o 00:02:15.886 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:15.886 CC test/nvme/err_injection/err_injection.o 00:02:15.886 CC examples/idxd/perf/perf.o 00:02:15.886 CC test/nvme/simple_copy/simple_copy.o 00:02:15.886 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.886 CC 
examples/accel/perf/accel_perf.o 00:02:15.886 CC test/nvme/reserve/reserve.o 00:02:15.886 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:15.886 CC test/event/app_repeat/app_repeat.o 00:02:15.886 CC test/nvme/fused_ordering/fused_ordering.o 00:02:15.886 CC examples/ioat/perf/perf.o 00:02:15.886 CC test/nvme/cuse/cuse.o 00:02:15.886 CC examples/util/zipf/zipf.o 00:02:15.886 CC test/dma/test_dma/test_dma.o 00:02:15.886 CC test/nvme/fdp/fdp.o 00:02:15.886 CC examples/bdev/hello_world/hello_bdev.o 00:02:15.886 CC test/event/scheduler/scheduler.o 00:02:15.886 CC test/accel/dif/dif.o 00:02:15.886 CC test/blobfs/mkfs/mkfs.o 00:02:15.886 CC examples/thread/thread/thread_ex.o 00:02:15.886 CC app/fio/bdev/fio_plugin.o 00:02:15.887 CC test/app/bdev_svc/bdev_svc.o 00:02:15.887 CC examples/nvmf/nvmf/nvmf.o 00:02:15.887 CC examples/bdev/bdevperf/bdevperf.o 00:02:15.887 CC examples/blob/cli/blobcli.o 00:02:15.887 LINK spdk_lspci 00:02:15.887 CC examples/blob/hello_world/hello_blob.o 00:02:16.148 LINK spdk_nvme_discover 00:02:16.148 CC test/env/mem_callbacks/mem_callbacks.o 00:02:16.148 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:16.148 CC test/lvol/esnap/esnap.o 00:02:16.148 LINK nvmf_tgt 00:02:16.148 LINK rpc_client_test 00:02:16.148 LINK spdk_tgt 00:02:16.148 LINK event_perf 00:02:16.148 LINK vtophys 00:02:16.148 LINK interrupt_tgt 00:02:16.148 LINK vhost 00:02:16.148 LINK histogram_perf 00:02:16.148 LINK reactor_perf 00:02:16.148 LINK led 00:02:16.409 LINK boot_partition 00:02:16.409 CXX test/cpp_headers/endian.o 00:02:16.409 LINK spdk_trace_record 00:02:16.409 LINK iscsi_tgt 00:02:16.409 LINK zipf 00:02:16.409 CXX test/cpp_headers/env_dpdk.o 00:02:16.409 CXX test/cpp_headers/env.o 00:02:16.409 LINK jsoncat 00:02:16.409 LINK reactor 00:02:16.409 CXX test/cpp_headers/event.o 00:02:16.409 CXX test/cpp_headers/fd_group.o 00:02:16.409 CXX test/cpp_headers/fd.o 00:02:16.409 LINK lsvmd 00:02:16.409 CXX test/cpp_headers/file.o 00:02:16.409 LINK env_dpdk_post_init 00:02:16.409 CXX test/cpp_headers/ftl.o 00:02:16.409 LINK err_injection 00:02:16.409 CXX test/cpp_headers/gpt_spec.o 00:02:16.409 LINK poller_perf 00:02:16.409 LINK connect_stress 00:02:16.409 LINK startup 00:02:16.409 LINK app_repeat 00:02:16.409 CXX test/cpp_headers/hexlify.o 00:02:16.409 LINK pmr_persistence 00:02:16.409 LINK verify 00:02:16.409 LINK fused_ordering 00:02:16.409 LINK stub 00:02:16.409 LINK ioat_perf 00:02:16.409 CXX test/cpp_headers/histogram_data.o 00:02:16.409 LINK hotplug 00:02:16.409 LINK reserve 00:02:16.409 LINK bdev_svc 00:02:16.409 LINK cmb_copy 00:02:16.409 LINK hello_sock 00:02:16.409 LINK doorbell_aers 00:02:16.409 CXX test/cpp_headers/idxd.o 00:02:16.409 LINK simple_copy 00:02:16.409 CXX test/cpp_headers/idxd_spec.o 00:02:16.409 CXX test/cpp_headers/init.o 00:02:16.409 LINK hello_world 00:02:16.409 CXX test/cpp_headers/ioat.o 00:02:16.409 CXX test/cpp_headers/ioat_spec.o 00:02:16.409 CXX test/cpp_headers/iscsi_spec.o 00:02:16.409 CXX test/cpp_headers/json.o 00:02:16.409 LINK sgl 00:02:16.409 LINK mkfs 00:02:16.409 CXX test/cpp_headers/jsonrpc.o 00:02:16.409 LINK hello_bdev 00:02:16.409 CXX test/cpp_headers/keyring.o 00:02:16.409 LINK reset 00:02:16.409 LINK aer 00:02:16.409 LINK spdk_dd 00:02:16.409 LINK scheduler 00:02:16.409 CXX test/cpp_headers/keyring_module.o 00:02:16.673 LINK nvme_dp 00:02:16.673 LINK hello_blob 00:02:16.673 LINK thread 00:02:16.673 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:16.673 LINK overhead 00:02:16.673 CXX test/cpp_headers/likely.o 00:02:16.673 LINK nvme_compliance 00:02:16.673 LINK 
arbitration 00:02:16.673 LINK nvmf 00:02:16.673 CXX test/cpp_headers/log.o 00:02:16.673 CXX test/cpp_headers/lvol.o 00:02:16.673 LINK reconnect 00:02:16.673 LINK abort 00:02:16.673 LINK idxd_perf 00:02:16.673 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:16.673 LINK fdp 00:02:16.673 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:16.673 CXX test/cpp_headers/memory.o 00:02:16.673 LINK bdevio 00:02:16.673 CXX test/cpp_headers/mmio.o 00:02:16.673 CXX test/cpp_headers/nbd.o 00:02:16.673 LINK test_dma 00:02:16.673 CXX test/cpp_headers/notify.o 00:02:16.673 CXX test/cpp_headers/nvme.o 00:02:16.673 CXX test/cpp_headers/nvme_intel.o 00:02:16.673 CXX test/cpp_headers/nvme_ocssd.o 00:02:16.673 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:16.673 CXX test/cpp_headers/nvme_spec.o 00:02:16.673 CXX test/cpp_headers/nvme_zns.o 00:02:16.673 CXX test/cpp_headers/nvmf_cmd.o 00:02:16.673 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:16.673 CXX test/cpp_headers/nvmf.o 00:02:16.673 LINK pci_ut 00:02:16.673 CXX test/cpp_headers/nvmf_spec.o 00:02:16.673 CXX test/cpp_headers/nvmf_transport.o 00:02:16.673 CXX test/cpp_headers/opal.o 00:02:16.673 CXX test/cpp_headers/opal_spec.o 00:02:16.673 LINK spdk_trace 00:02:16.673 CXX test/cpp_headers/pci_ids.o 00:02:16.673 CXX test/cpp_headers/pipe.o 00:02:16.673 CXX test/cpp_headers/queue.o 00:02:16.933 CXX test/cpp_headers/reduce.o 00:02:16.933 CXX test/cpp_headers/rpc.o 00:02:16.933 LINK dif 00:02:16.934 CXX test/cpp_headers/scheduler.o 00:02:16.934 CXX test/cpp_headers/scsi.o 00:02:16.934 LINK accel_perf 00:02:16.934 CXX test/cpp_headers/scsi_spec.o 00:02:16.934 CXX test/cpp_headers/sock.o 00:02:16.934 CXX test/cpp_headers/stdinc.o 00:02:16.934 CXX test/cpp_headers/string.o 00:02:16.934 CXX test/cpp_headers/thread.o 00:02:16.934 CXX test/cpp_headers/trace.o 00:02:16.934 CXX test/cpp_headers/trace_parser.o 00:02:16.934 CXX test/cpp_headers/tree.o 00:02:16.934 CXX test/cpp_headers/ublk.o 00:02:16.934 CXX test/cpp_headers/util.o 00:02:16.934 CXX test/cpp_headers/uuid.o 00:02:16.934 CXX test/cpp_headers/version.o 00:02:16.934 CXX test/cpp_headers/vfio_user_pci.o 00:02:16.934 CXX test/cpp_headers/vfio_user_spec.o 00:02:16.934 CXX test/cpp_headers/vhost.o 00:02:16.934 LINK nvme_manage 00:02:16.934 CXX test/cpp_headers/vmd.o 00:02:16.934 CXX test/cpp_headers/xor.o 00:02:16.934 CXX test/cpp_headers/zipf.o 00:02:16.934 LINK blobcli 00:02:16.934 LINK spdk_bdev 00:02:16.934 LINK nvme_fuzz 00:02:17.192 LINK spdk_nvme 00:02:17.192 LINK mem_callbacks 00:02:17.192 LINK spdk_nvme_identify 00:02:17.192 LINK spdk_nvme_perf 00:02:17.192 LINK vhost_fuzz 00:02:17.451 LINK bdevperf 00:02:17.451 LINK spdk_top 00:02:17.451 LINK memory_ut 00:02:17.721 LINK cuse 00:02:18.049 LINK iscsi_fuzz 00:02:19.954 LINK esnap 00:02:20.213 00:02:20.213 real 0m42.419s 00:02:20.213 user 7m4.046s 00:02:20.213 sys 3m21.471s 00:02:20.213 10:29:49 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:20.213 10:29:49 make -- common/autotest_common.sh@10 -- $ set +x 00:02:20.213 ************************************ 00:02:20.213 END TEST make 00:02:20.213 ************************************ 00:02:20.473 10:29:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:20.473 10:29:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:20.473 10:29:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:20.473 10:29:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.473 10:29:49 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:20.473 10:29:49 -- pm/common@44 -- $ pid=3771607 00:02:20.473 10:29:49 -- pm/common@50 -- $ kill -TERM 3771607 00:02:20.473 10:29:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.473 10:29:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:20.473 10:29:49 -- pm/common@44 -- $ pid=3771609 00:02:20.473 10:29:49 -- pm/common@50 -- $ kill -TERM 3771609 00:02:20.473 10:29:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.473 10:29:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:20.473 10:29:49 -- pm/common@44 -- $ pid=3771610 00:02:20.473 10:29:49 -- pm/common@50 -- $ kill -TERM 3771610 00:02:20.473 10:29:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.473 10:29:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:20.473 10:29:49 -- pm/common@44 -- $ pid=3771628 00:02:20.473 10:29:49 -- pm/common@50 -- $ sudo -E kill -TERM 3771628 00:02:20.473 10:29:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:02:20.473 10:29:49 -- nvmf/common.sh@7 -- # uname -s 00:02:20.473 10:29:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:20.473 10:29:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:20.473 10:29:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:20.473 10:29:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:20.473 10:29:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:20.473 10:29:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:20.473 10:29:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:20.473 10:29:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:20.473 10:29:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:20.473 10:29:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:20.473 10:29:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:02:20.473 10:29:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:02:20.473 10:29:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:20.473 10:29:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:20.473 10:29:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:20.473 10:29:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:20.473 10:29:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:02:20.473 10:29:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:20.473 10:29:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.473 10:29:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.473 10:29:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.473 10:29:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.473 10:29:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.473 10:29:49 -- paths/export.sh@5 -- # export PATH 00:02:20.473 10:29:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.473 10:29:49 -- nvmf/common.sh@47 -- # : 0 00:02:20.473 10:29:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:20.473 10:29:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:20.473 10:29:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:20.473 10:29:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:20.473 10:29:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:20.473 10:29:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:20.473 10:29:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:20.473 10:29:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:20.473 10:29:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:20.474 10:29:49 -- spdk/autotest.sh@32 -- # uname -s 00:02:20.474 10:29:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:20.474 10:29:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:20.474 10:29:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:02:20.474 10:29:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:20.474 10:29:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:02:20.474 10:29:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:20.474 10:29:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:20.474 10:29:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:20.474 10:29:49 -- spdk/autotest.sh@48 -- # udevadm_pid=3829668 00:02:20.474 10:29:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:20.474 10:29:49 -- pm/common@17 -- # local monitor 00:02:20.474 10:29:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.474 10:29:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:20.474 10:29:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.474 10:29:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.474 10:29:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.474 10:29:49 -- pm/common@25 -- # sleep 1 00:02:20.474 10:29:49 -- pm/common@21 -- # date +%s 00:02:20.474 10:29:49 -- pm/common@21 -- # date +%s 00:02:20.474 10:29:49 -- pm/common@21 -- # date +%s 00:02:20.474 10:29:49 -- pm/common@21 -- # date +%s 00:02:20.474 10:29:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008189 00:02:20.474 10:29:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008189 00:02:20.474 10:29:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008189 00:02:20.474 10:29:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008189 00:02:20.474 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008189_collect-vmstat.pm.log 00:02:20.474 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008189_collect-cpu-load.pm.log 00:02:20.474 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008189_collect-cpu-temp.pm.log 00:02:20.474 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008189_collect-bmc-pm.bmc.pm.log 00:02:21.411 10:29:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:21.411 10:29:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:21.411 10:29:50 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:21.411 10:29:50 -- common/autotest_common.sh@10 -- # set +x 00:02:21.411 10:29:50 -- spdk/autotest.sh@59 -- # create_test_list 00:02:21.411 10:29:50 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:21.411 10:29:50 -- common/autotest_common.sh@10 -- # set +x 00:02:21.411 10:29:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autotest.sh 00:02:21.411 10:29:50 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:21.411 10:29:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:21.411 10:29:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:02:21.411 10:29:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:02:21.411 10:29:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:21.411 10:29:50 -- common/autotest_common.sh@1454 -- # uname 00:02:21.411 10:29:50 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:21.411 10:29:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:21.411 10:29:50 -- common/autotest_common.sh@1474 -- # uname 00:02:21.411 10:29:50 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:21.411 10:29:50 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:21.670 10:29:50 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:21.670 10:29:50 -- spdk/autotest.sh@72 -- # hash lcov 00:02:21.670 10:29:50 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:21.670 10:29:50 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:21.670 --rc lcov_branch_coverage=1 00:02:21.670 --rc lcov_function_coverage=1 00:02:21.670 --rc genhtml_branch_coverage=1 00:02:21.670 --rc genhtml_function_coverage=1 00:02:21.670 --rc genhtml_legend=1 00:02:21.670 --rc geninfo_all_blocks=1 00:02:21.670 ' 
00:02:21.670 10:29:50 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:21.670 --rc lcov_branch_coverage=1 00:02:21.670 --rc lcov_function_coverage=1 00:02:21.670 --rc genhtml_branch_coverage=1 00:02:21.670 --rc genhtml_function_coverage=1 00:02:21.670 --rc genhtml_legend=1 00:02:21.670 --rc geninfo_all_blocks=1 00:02:21.670 ' 00:02:21.670 10:29:50 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:21.670 --rc lcov_branch_coverage=1 00:02:21.670 --rc lcov_function_coverage=1 00:02:21.670 --rc genhtml_branch_coverage=1 00:02:21.670 --rc genhtml_function_coverage=1 00:02:21.670 --rc genhtml_legend=1 00:02:21.670 --rc geninfo_all_blocks=1 00:02:21.670 --no-external' 00:02:21.670 10:29:50 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:21.670 --rc lcov_branch_coverage=1 00:02:21.670 --rc lcov_function_coverage=1 00:02:21.670 --rc genhtml_branch_coverage=1 00:02:21.670 --rc genhtml_function_coverage=1 00:02:21.670 --rc genhtml_legend=1 00:02:21.670 --rc geninfo_all_blocks=1 00:02:21.670 --no-external' 00:02:21.670 10:29:50 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:21.670 lcov: LCOV version 1.14 00:02:21.670 10:29:50 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info 00:02:31.647 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:31.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:41.625 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:41.625 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:41.625 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:41.625 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:41.625 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:41.625 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:41.625 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:41.625 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:41.625 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:41.625 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:41.625 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:41.625 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:41.625 
00:02:43.262 10:30:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 10:30:12 -- common/autotest_common.sh@723 -- # xtrace_disable 10:30:12 -- common/autotest_common.sh@10 -- # set +x 00:02:43.262 
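The timing_enter call above (paired with a timing_exit later in the run) is the harness bracketing a named phase with timestamps, while xtrace_disable mutes the set -x noise around it. A minimal reimplementation of that pattern, assuming nothing about the real autotest_common.sh beyond what the trace shows:

    timing_stack=()
    timing_enter() {                    # push "<phase>:<start time>"
        timing_stack+=("$1:$(date +%s.%N)")
    }
    timing_exit() {                     # pop the phase and print its elapsed time
        local top=${timing_stack[-1]}
        unset 'timing_stack[-1]'
        printf '%s took %.3fs\n' "${top%%:*}" "$(bc <<< "$(date +%s.%N) - ${top##*:}")"
    }
    xtrace_disable() { set +x; }        # silence command tracing for noisy sections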
10:30:12 -- spdk/autotest.sh@91 -- # rm -f 00:02:43.262 10:30:12 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.797 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:02:45.797 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:45.797 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:45.797 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:45.797 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:45.797 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.084 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:46.343 10:30:15 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:46.343 10:30:15 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:46.343 10:30:15 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:46.343 10:30:15 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:46.343 10:30:15 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:46.343 10:30:15 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:46.343 10:30:15 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:46.343 10:30:15 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.343 10:30:15 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:46.343 10:30:15 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:46.343 10:30:15 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:02:46.343 10:30:15 -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:02:46.343 10:30:15 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:46.343 10:30:15 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:46.343 10:30:15 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:46.343 10:30:15 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:02:46.343 10:30:15 -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:02:46.343 10:30:15 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:46.343 10:30:15 -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:02:46.343 10:30:15 -- common/autotest_common.sh@1673 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:46.343 10:30:15 -- spdk/autotest.sh@98 -- # (( 1 > 0 )) 00:02:46.343 10:30:15 -- spdk/autotest.sh@103 -- # export PCI_BLOCKED=0000:5f:00.0 00:02:46.343 10:30:15 -- spdk/autotest.sh@103 -- # PCI_BLOCKED=0000:5f:00.0 00:02:46.343 10:30:15 -- spdk/autotest.sh@104 -- # export PCI_ZONED=0000:5f:00.0 00:02:46.343 10:30:15 -- spdk/autotest.sh@104 -- # PCI_ZONED=0000:5f:00.0 
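Decoded, the scan just traced does two things: it reads each namespace's queue/zoned attribute from sysfs (anything other than "none", here the host-managed nvme1n2 behind 0000:5f:00.0, counts as zoned), and it exports that controller's BDF through PCI_BLOCKED/PCI_ZONED so the wipe below and the later setup.sh passes leave it alone. A rough sketch of both checks; pci_should_skip is a hypothetical stand-in for the filtering setup.sh applies, not its actual code:

    # a namespace is zoned when queue/zoned reads anything but "none"
    for nvme in /sys/block/nvme*; do
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            echo "zoned namespace: ${nvme##*/}"
        fi
    done

    # hypothetical allow/block filter in the spirit of PCI_BLOCKED/PCI_ALLOWED
    pci_should_skip() {
        local bdf=$1
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 0              # explicitly blocked
        [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $bdf "* ]] && return 0
        return 1                                                      # usable
    }

The GPT probe-and-wipe traced next applies the same guard: only namespaces that are not zoned and show no valid partition table get their first MiB zeroed with dd.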
00:02:46.343 10:30:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:46.343 10:30:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:46.343 10:30:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:46.343 10:30:15 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:46.343 10:30:15 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:46.343 No valid GPT data, bailing 00:02:46.343 10:30:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:46.343 10:30:15 -- scripts/common.sh@391 -- # pt= 00:02:46.343 10:30:15 -- scripts/common.sh@392 -- # return 1 00:02:46.343 10:30:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:46.343 1+0 records in 00:02:46.343 1+0 records out 00:02:46.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549155 s, 191 MB/s 00:02:46.343 10:30:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:46.343 10:30:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:46.343 10:30:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:46.344 10:30:15 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:46.344 10:30:15 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:46.344 No valid GPT data, bailing 00:02:46.344 10:30:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:46.344 10:30:15 -- scripts/common.sh@391 -- # pt= 00:02:46.344 10:30:15 -- scripts/common.sh@392 -- # return 1 00:02:46.344 10:30:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:46.344 1+0 records in 00:02:46.344 1+0 records out 00:02:46.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525749 s, 199 MB/s 00:02:46.344 10:30:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:46.344 10:30:15 -- spdk/autotest.sh@112 -- # [[ -z 0000:5f:00.0 ]] 00:02:46.344 10:30:15 -- spdk/autotest.sh@112 -- # continue 00:02:46.344 10:30:15 -- spdk/autotest.sh@118 -- # sync 00:02:46.344 10:30:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:46.344 10:30:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:46.344 10:30:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.533 10:30:18 -- spdk/autotest.sh@124 -- # uname -s 00:02:50.534 10:30:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:50.534 10:30:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.534 10:30:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:50.534 10:30:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:50.534 10:30:18 -- common/autotest_common.sh@10 -- # set +x 00:02:50.534 ************************************ 00:02:50.534 START TEST setup.sh 00:02:50.534 ************************************ 00:02:50.534 10:30:18 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.534 * Looking for test storage... 
00:02:50.534 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:02:50.534 10:30:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:50.534 10:30:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.534 10:30:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/acl.sh 00:02:50.534 10:30:19 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:50.534 10:30:19 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:50.534 10:30:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.534 ************************************ 00:02:50.534 START TEST acl 00:02:50.534 ************************************ 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/acl.sh 00:02:50.534 * Looking for test storage... 00:02:50.534 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:02:50.534 10:30:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:02:50.534 10:30:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:50.534 10:30:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:50.534 10:30:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:50.534 10:30:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:50.534 10:30:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.534 10:30:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:50.534 10:30:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.534 10:30:19 setup.sh.acl -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.825 10:30:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:53.825 10:30:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:53.825 10:30:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.825 10:30:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:53.825 10:30:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.825 10:30:22 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:02:55.772 Hugepages 00:02:55.772 node hugesize free / total 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.772 00:02:55.772 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:55.772 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 
setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@21 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 
10:30:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:56.032 10:30:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:56.032 10:30:25 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:56.032 10:30:25 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:56.032 10:30:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:56.033 ************************************ 00:02:56.033 START TEST denied 00:02:56.033 ************************************ 00:02:56.033 10:30:25 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:02:56.033 10:30:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0' 00:02:56.033 10:30:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:56.033 10:30:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:56.033 10:30:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.033 10:30:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:02:59.324 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.324 10:30:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.520 00:03:03.520 real 0m7.236s 00:03:03.520 user 0m2.389s 00:03:03.520 sys 0m4.082s 00:03:03.520 10:30:32 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:03.520 10:30:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:03.520 ************************************ 00:03:03.520 END TEST denied 00:03:03.520 ************************************ 00:03:03.520 10:30:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:03.520 10:30:32 setup.sh.acl -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:03.520 10:30:32 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:03.520 10:30:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:03.520 ************************************ 00:03:03.520 START TEST allowed 00:03:03.520 ************************************ 00:03:03.520 10:30:32 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:03.520 10:30:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:03.520 10:30:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:03.520 10:30:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:03.520 10:30:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.520 10:30:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:07.714 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:07.714 10:30:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:07.714 10:30:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:07.714 10:30:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:07.714 10:30:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.714 10:30:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.004 00:03:11.004 real 0m7.218s 00:03:11.004 user 0m2.210s 00:03:11.004 sys 0m4.066s 00:03:11.004 10:30:39 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:11.004 10:30:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:11.004 ************************************ 00:03:11.004 END TEST allowed 00:03:11.004 ************************************ 00:03:11.004 00:03:11.004 real 0m20.477s 00:03:11.004 user 0m6.750s 00:03:11.004 sys 0m12.064s 00:03:11.004 10:30:39 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:11.004 10:30:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.004 ************************************ 00:03:11.004 END TEST acl 00:03:11.004 ************************************ 00:03:11.004 10:30:39 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.004 10:30:39 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:11.004 10:30:39 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:11.004 10:30:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.004 ************************************ 00:03:11.004 START TEST hugepages 00:03:11.004 ************************************ 00:03:11.004 10:30:39 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.004 * Looking for test storage... 
00:03:11.004 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 75900084 kB' 'MemAvailable: 79318016 kB' 'Buffers: 2696 kB' 'Cached: 9733220 kB' 'SwapCached: 0 kB' 'Active: 6702540 kB' 'Inactive: 3519504 kB' 'Active(anon): 6315432 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489508 kB' 'Mapped: 210864 kB' 'Shmem: 5829304 kB' 'KReclaimable: 215092 kB' 'Slab: 702372 kB' 'SReclaimable: 215092 kB' 'SUnreclaim: 487280 kB' 'KernelStack: 19920 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52952932 kB' 'Committed_AS: 7716992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220960 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:11.004 10:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:11.004 10:30:39 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 10:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... the same IFS / read / compare / continue trace repeats for each remaining /proc/meminfo field ...]
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:11.006 10:30:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:11.006 10:30:39 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:11.006 10:30:39 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
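The get_nodes/clear_hp trace above discovers the NUMA nodes and then zeroes every per-node hugepage pool before the test sets its own counts. A minimal standalone sketch of that clearing step, assuming only the standard sysfs layout (an illustration, not the project's setup/hugepages.sh):

    #!/usr/bin/env bash
    # Zero each per-size hugepage pool on every NUMA node, mirroring the
    # repeated "echo 0" writes in the clear_hp trace above. Run as root.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done

Writing 0 releases the free pages in each pool; pages currently in use become surplus and are returned to the kernel only when their owners free them.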
00:03:11.006 10:30:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:11.006 ************************************
00:03:11.006 START TEST default_setup
00:03:11.006 ************************************
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.006 10:30:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:13.542 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:13.801 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:13.801 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:14.060 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:14.060 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:14.060 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:14.060 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:15.004 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78062112 kB' 'MemAvailable: 81479768 kB' 'Buffers: 2696 kB' 'Cached: 9733344 kB' 'SwapCached: 0 kB' 'Active: 6719928 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332820 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506576 kB' 'Mapped: 211516 kB' 'Shmem: 5829428 kB' 'KReclaimable: 214540 kB' 'Slab: 700192 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485652 kB' 'KernelStack: 20016 kB' 'PageTables: 9572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7740108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221008 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
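The get_meminfo calls traced here are a plain field lookup over /proc/meminfo: split each line on ': ', skip keys until the requested one matches, and echo its value. A minimal sketch of that loop as it can be read off the trace (an illustration, not the project's setup/common.sh):

    #!/usr/bin/env bash
    # Print one /proc/meminfo field, e.g. `get_meminfo Hugepagesize` -> 2048.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
            echo "$val"                        # value only; the kB unit lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize

Each lookup rescans the file from the top, which is why the trace shows one full read/compare/continue pass per requested key.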
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.004 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue iterations elided for: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted ...]
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
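Note the local node= and the [[ -e /sys/devices/system/node/node/meminfo ]] test above: get_meminfo can be pointed at a single NUMA node, and with an empty node id the path test fails and the helper falls back to the global /proc/meminfo. A sketch of that source selection, inferred from the trace (names illustrative):

    # Pick the meminfo source: a node's sysfs file when an id is given,
    # the global /proc/meminfo otherwise (the empty-id case in this trace).
    node=${1:-}
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

The per-node files prefix every line with "Node N", which is what the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips before the key comparison.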
'' ]] 00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78061332 kB' 'MemAvailable: 81478988 kB' 'Buffers: 2696 kB' 'Cached: 9733344 kB' 'SwapCached: 0 kB' 'Active: 6723764 kB' 'Inactive: 3519504 kB' 'Active(anon): 6336656 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510356 kB' 'Mapped: 211756 kB' 'Shmem: 5829428 kB' 'KReclaimable: 214540 kB' 'Slab: 700184 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485644 kB' 'KernelStack: 20016 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7743172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221044 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:15.005 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.006 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78060580 kB' 'MemAvailable: 81478236 kB' 'Buffers: 2696 kB' 'Cached: 9733364 kB' 'SwapCached: 0 kB' 'Active: 6717888 kB' 'Inactive: 3519504 kB' 'Active(anon): 6330780 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 
kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504356 kB' 'Mapped: 211260 kB' 'Shmem: 5829448 kB' 'KReclaimable: 214540 kB' 'Slab: 699876 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485336 kB' 'KernelStack: 19952 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7735588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221056 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.007 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same compare/continue iteration repeats for every remaining /proc/meminfo key until the loop reaches HugePages_Rsvd]
00:03:15.008 10:30:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.008 nr_hugepages=1024 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.008 resv_hugepages=0 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.008 surplus_hugepages=0 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.008 anon_hugepages=0 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78059112 kB' 'MemAvailable: 81476768 kB' 'Buffers: 2696 kB' 'Cached: 9733384 kB' 'SwapCached: 0 kB' 'Active: 6718552 kB' 'Inactive: 3519504 kB' 'Active(anon): 6331444 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505036 kB' 'Mapped: 210980 kB' 'Shmem: 5829468 kB' 'KReclaimable: 214540 kB' 'Slab: 699812 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485272 kB' 'KernelStack: 20192 kB' 
'PageTables: 9552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7737096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221024 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.008 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: identical compare/continue iterations over the remaining /proc/meminfo keys until the loop reaches HugePages_Total]
00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 27600180 kB' 'MemUsed: 5034448 kB' 'SwapCached: 0 kB' 'Active: 1543988 kB' 'Inactive: 58760 kB' 'Active(anon): 1396760 kB' 'Inactive(anon): 0 kB' 'Active(file): 147228 kB' 'Inactive(file): 58760 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1427724 kB' 'Mapped: 91408 kB' 'AnonPages: 178124 kB' 'Shmem: 1221736 kB' 'KernelStack: 9928 kB' 'PageTables: 4816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72864 kB' 'Slab: 332376 kB' 'SReclaimable: 72864 kB' 'SUnreclaim: 259512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.009 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: identical compare/continue iterations over the remaining node0 meminfo keys until the loop reaches HugePages_Surp]
00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.010 10:30:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:15.010 node0=1024 expecting 1024 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:15.010 00:03:15.010 real 0m4.162s 00:03:15.010 user 0m1.416s 00:03:15.010 sys 0m1.997s 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:15.010 10:30:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:15.010 ************************************ 00:03:15.010 END TEST default_setup 00:03:15.010 ************************************ 00:03:15.010 10:30:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:15.010 10:30:44 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:15.010 10:30:44 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:15.010 10:30:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.270 ************************************ 00:03:15.270 START TEST per_node_1G_alloc 00:03:15.270 ************************************ 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:15.270 10:30:44 
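The scan condensed above is the xtrace of the get_meminfo helper in setup/common.sh: it splits each /proc/meminfo line on ': ' into a key and a value, continues past every key that is not the one requested, then echoes the value and returns 0. A minimal standalone sketch of the same lookup pattern (the name get_meminfo_value is ours for illustration; the real helper also supports per-node meminfo and reads through mapfile):

    #!/usr/bin/env bash
    # Sketch of the lookup the trace performs; assumes plain /proc/meminfo.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Echo the numeric value once the requested key is reached.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Surp   # prints 0 on this host, as in the trace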
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.270 10:30:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:17.807 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:17.807 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.807 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.807 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
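Here hugepages.sh exports NRHUGE=512 with HUGENODE=0,1 and re-runs scripts/setup.sh, so each of the two NUMA nodes gets 512 pages of the default 2048 kB size, 1024 in total. A sketch of how such a per-node reservation can be made directly through the standard sysfs interface (setup.sh itself does considerably more, e.g. the driver binding shown above; this is only the reservation step under that assumption):

    #!/usr/bin/env bash
    # Reserve NRHUGE 2 MiB hugepages on each node listed in HUGENODE.
    NRHUGE=512
    HUGENODE=0,1

    IFS=',' read -ra nodes <<< "$HUGENODE"
    for node in "${nodes[@]}"; do
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
    done
    grep HugePages_Total /proc/meminfo   # expect 1024 with both nodes populated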
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.072 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.073 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.073 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.073 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78062004 kB' 'MemAvailable: 81480164 kB' 'Buffers: 2696 kB' 'Cached: 9733484 kB' 'SwapCached: 0 kB' 'Active: 6716212 kB' 'Inactive: 3519504 kB' 'Active(anon): 6329104 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502352 kB' 'Mapped: 209948 kB' 'Shmem: 5829568 kB' 'KReclaimable: 214540 kB' 'Slab: 700572 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 486032 kB' 'KernelStack: 19760 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7725036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220992 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
[xtrace condensed: the per-key scan at setup/common.sh@32 continues past every key from MemTotal through HardwareCorrupted until AnonHugePages matches]
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
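With anon captured, verify_nr_hugepages goes on to fetch HugePages_Surp and HugePages_Rsvd the same way, one full meminfo scan per counter. A sketch of the bookkeeping these lookups feed, reusing the hypothetical get_meminfo_value helper sketched earlier (the real accounting lives in setup/hugepages.sh):

    # anon/surp/resv as in the trace; all 0 in this run.
    anon=$(get_meminfo_value AnonHugePages)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    total=$(get_meminfo_value HugePages_Total)
    free=$(get_meminfo_value HugePages_Free)

    # With no surplus or reserved pages, every configured page should still be free.
    (( surp == 0 && resv == 0 && free == total )) && echo "node0=$total expecting $total"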
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.074 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78062380 kB' 'MemAvailable: 81480036 kB' 'Buffers: 2696 kB' 'Cached: 9733488 kB' 'SwapCached: 0 kB' 'Active: 6715372 kB' 'Inactive: 3519504 kB' 'Active(anon): 6328264 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502032 kB' 'Mapped: 209820 kB' 'Shmem: 5829572 kB' 'KReclaimable: 214540 kB' 'Slab: 700516 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485976 kB' 'KernelStack: 19744 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7725052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220976 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
[xtrace condensed: the per-key scan at setup/common.sh@32 continues past every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
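The snapshots printed above are internally consistent: Hugetlb (2097152 kB) is exactly HugePages_Total times Hugepagesize (1024 x 2048 kB). The same cross-check can be run against a live /proc/meminfo:

    # Multiply the configured page count by the page size in kB.
    awk '/^HugePages_Total/ {n = $2}
         /^Hugepagesize/    {sz = $2}
         END {print n * sz " kB"}' /proc/meminfo   # 2097152 kB on this host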
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.076 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78062380 kB' 'MemAvailable: 81480036 kB' 'Buffers: 2696 kB' 'Cached: 9733508 kB' 'SwapCached: 0 kB' 'Active: 6715416 kB' 'Inactive: 3519504 kB' 'Active(anon): 6328308 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502032 kB' 'Mapped: 209820 kB' 'Shmem: 5829592 kB' 'KReclaimable: 214540 kB' 'Slab: 700516 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485976 kB' 'KernelStack: 19744 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7725076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220976 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
[... trace condensed: every field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped via 'setup/common.sh@32 -- # continue' ...]
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:18.078 nr_hugepages=1024
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:18.078 resv_hugepages=0
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:18.078 surplus_hugepages=0
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:18.078 anon_hugepages=0
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
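Aside for readers of this trace: the get_meminfo helper being exercised above is essentially a line-by-line meminfo parser. Below is a minimal sketch of that logic, assuming bash with extglob; it is a reconstruction for illustration, not the verbatim SPDK setup/common.sh.

    #!/usr/bin/env bash
    # Sketch of the traced get_meminfo logic: read /proc/meminfo, or
    # /sys/devices/system/node/node<N>/meminfo when a node number is given,
    # strip the "Node N " prefix that per-node files carry, and print the
    # value of one requested field. Reconstruction, not the SPDK source.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }        # per-node lines: "Node 0 MemTotal: ..."
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

    get_meminfo HugePages_Rsvd      # prints 0 in the run traced above
    get_meminfo HugePages_Surp 0    # node0 surplus; also 0 in this run

The field-by-field `continue` entries in the trace are this loop skipping every non-matching key until the requested one is found.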
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:18.078 10:30:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.078 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.079 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78062884 kB' 'MemAvailable: 81480540 kB' 'Buffers: 2696 kB' 'Cached: 9733528 kB' 'SwapCached: 0 kB' 'Active: 6715836 kB' 'Inactive: 3519504 kB' 'Active(anon): 6328728 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502416 kB' 'Mapped: 210324 kB' 'Shmem: 5829612 kB' 'KReclaimable: 214540 kB' 'Slab: 700516 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485976 kB' 'KernelStack: 19728 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7726324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220960 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
[... trace condensed: every field from MemTotal through Unaccepted is tested against HugePages_Total and skipped via 'setup/common.sh@32 -- # continue' ...]
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
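The get_nodes step just traced enumerates NUMA nodes from sysfs and seeds a per-node hugepage target; the loop that follows then reads HugePages_Surp from each node's own meminfo file. A minimal sketch of that enumeration, again assuming bash with extglob (a reconstruction for illustration, not the SPDK source):

    #!/usr/bin/env bash
    # Sketch of the traced get_nodes step: list NUMA nodes from sysfs and
    # assign each a per-node hugepage count (512 per node here; 512 x 2
    # nodes accounts for the 1024 total seen earlier in this log).
    shopt -s extglob nullglob

    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # nodes_sys[0]=512, nodes_sys[1]=512
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2
    echo "no_nodes=$no_nodes"           # this test host prints no_nodes=2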
00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.080 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 28660460 kB' 'MemUsed: 3974168 kB' 'SwapCached: 0 kB' 'Active: 1547756 kB' 'Inactive: 58760 kB' 'Active(anon): 1400528 kB' 'Inactive(anon): 0 kB' 'Active(file): 147228 kB' 'Inactive(file): 58760 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1427720 kB' 'Mapped: 91024 kB' 'AnonPages: 182480 kB' 'Shmem: 1221732 kB' 'KernelStack: 9480 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72864 kB' 'Slab: 332868 kB' 'SReclaimable: 72864 kB' 'SUnreclaim: 260004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.081 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.081 
[xtrace scan elided: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ over the remaining node0 meminfo fields, Mlocked through HugePages_Free; none matches HugePages_Surp, so every iteration takes the @32 continue branch]
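A note on the backslash-riddled patterns in these tests: the right-hand side of an unquoted [[ == ]] comparison is a glob pattern, so bash's xtrace prints it with every character escaped to show it is matched literally. A two-line illustration (the variable name here is ours, not the harness's):

    key=HugePages_Surp
    # Escaping each character, as xtrace renders it, is equivalent to
    # quoting: both force a literal comparison instead of a glob match.
    [[ $key == "HugePages_Surp" ]] && echo "literal match"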
00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688336 kB' 'MemFree: 49395496 kB' 'MemUsed: 11292840 kB' 'SwapCached: 0 kB' 'Active: 5172668 kB' 'Inactive: 3460744 kB' 'Active(anon): 4932788 kB' 'Inactive(anon): 0 kB' 'Active(file): 239880 kB' 'Inactive(file): 3460744 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8308552 kB' 'Mapped: 119452 kB' 'AnonPages: 324996 kB' 'Shmem: 4607928 kB' 'KernelStack: 10264 kB' 'PageTables: 4688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 141676 kB' 'Slab: 367648 kB' 'SReclaimable: 141676 kB' 'SUnreclaim: 225972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.082 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
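Condensed into a standalone sketch, the lookup this trace keeps stepping through looks roughly like the following (illustrative names; the harness's real version lives in setup/common.sh and differs in detail):

    #!/usr/bin/env bash
    # Sketch only, not the harness's actual code.
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Prefer the per-node view when a node index is given.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp 1   # would print 0 here, per the node1 snapshot above

Each non-matching field costs one read and one failed [[ ]] test, which is exactly the IFS/read/continue churn that fills this log.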
[xtrace scan elided: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ over the node1 meminfo fields, MemFree through Unaccepted; none matches HugePages_Surp, so every iteration takes the @32 continue branch] 00:03:18.083 10:30:47
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.083 node0=512 expecting 512 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.083 node1=512 expecting 512 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.083 00:03:18.083 real 0m3.052s 00:03:18.083 user 0m1.210s 00:03:18.083 sys 0m1.862s 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:18.083 10:30:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.083 ************************************ 00:03:18.083 END TEST per_node_1G_alloc 00:03:18.083 ************************************ 00:03:18.343 10:30:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:18.343 10:30:47 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:18.344 10:30:47 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:18.344 10:30:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.344 ************************************ 00:03:18.344 START TEST even_2G_alloc 00:03:18.344 ************************************ 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:03:18.344 10:30:47 
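Before the even_2G_alloc trace below gets going: get_test_nr_hugepages converts the 2097152 request (apparently kB, i.e. 2 GiB) into a page count and spreads it evenly across both NUMA nodes. A minimal sketch of that arithmetic, using only figures visible in this log (variable names approximate setup/hugepages.sh; the real function handles more cases):

    size=2097152              # requested allocation in kB (2 GiB)
    default_hugepages=2048    # default hugepage size in kB
    nr_hugepages=$((size / default_hugepages))   # 1024 pages
    _no_nodes=2
    declare -a nodes_test
    for ((node = _no_nodes - 1; node >= 0; node--)); do
        nodes_test[node]=$((nr_hugepages / _no_nodes))   # 512 per node
    done
    printf 'node%d=%d expecting 512\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"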
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.344 10:30:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:20.947 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:21.208 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.208 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:00:04.1 (8086 2021): Already using the 
vfio-pci driver 00:03:21.208 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.208 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.208 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78074616 kB' 'MemAvailable: 81492272 kB' 'Buffers: 2696 kB' 'Cached: 9733648 kB' 'SwapCached: 0 kB' 'Active: 6717724 kB' 'Inactive: 3519504 kB' 'Active(anon): 6330616 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503856 kB' 'Mapped: 209924 kB' 'Shmem: 5829732 kB' 'KReclaimable: 214540 kB' 'Slab: 699876 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485336 kB' 'KernelStack: 19760 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
54001508 kB' 'Committed_AS: 7725712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221008 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:21.209
[xtrace scan elided: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ over the /proc/meminfo fields, MemTotal through HardwareCorrupted; none matches AnonHugePages, so every iteration takes the @32 continue branch]
00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19
-- # local var val 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.210 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78075324 kB' 'MemAvailable: 81492980 kB' 'Buffers: 2696 kB' 'Cached: 9733648 kB' 'SwapCached: 0 kB' 'Active: 6717280 kB' 'Inactive: 3519504 kB' 'Active(anon): 6330172 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503396 kB' 'Mapped: 209908 kB' 'Shmem: 5829732 kB' 'KReclaimable: 214540 kB' 'Slab: 699876 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485336 kB' 'KernelStack: 19728 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7725728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220992 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
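The snapshot just printed is what verify_nr_hugepages has to work with: HugePages_Total 1024, HugePages_Free 1024, HugePages_Surp 0 for the even 2G allocation. One plausible spot-check distilled from it (ours, not the harness's actual bookkeeping):

    # Illustrative check only; setup/hugepages.sh tracks per-node totals.
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    # Surplus pages sit beyond the configured pool, so the configured
    # pool is total minus surplus; it should match the request.
    if ((total - surp == expected)); then
        echo "hugepages OK: total=$total free=$free surp=$surp"
    else
        echo "hugepages mismatch: got $((total - surp)), expected $expected" >&2
    fi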
[xtrace scan elided: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ over the /proc/meminfo fields, MemTotal through PageTables; none matches HugePages_Surp, so every iteration takes the @32 continue branch] 00:03:21.475 10:30:50
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.475 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78075728 kB' 'MemAvailable: 81493384 kB' 'Buffers: 2696 kB' 'Cached: 9733664 kB' 'SwapCached: 0 kB' 'Active: 6717756 kB' 'Inactive: 3519504 kB' 'Active(anon): 6330648 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504356 kB' 'Mapped: 210336 kB' 'Shmem: 5829748 kB' 'KReclaimable: 214540 kB' 'Slab: 699852 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485312 kB' 'KernelStack: 19760 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7727504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220992 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 10:30:50 
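The trace above is setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a per-node sysfs meminfo when a node argument is given), strips any "Node N " line prefix, then walks the keys with IFS=': ' until the requested field matches and echoes its value. A minimal sketch of that pattern, assuming the stock /proc/meminfo and /sys/devices/system/node/nodeN/meminfo layouts; illustrative only, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob                       # for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs, with each line prefixed "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                # numeric value; the "kB" unit lands in _
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Rsvd             # 0 on this box
    get_meminfo HugePages_Surp 0           # node 0 surplus, also 0 here

The escaped pattern in the trace ([[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]) is just how xtrace renders a literal right-hand side; quoting "$get" in the sketch has the same effect.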
[trace condensed: setup/common.sh@31-32 scans the snapshot key by key (MemTotal through HugePages_Free), each failing the match against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and hitting continue]
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.478 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78072472 kB' 'MemAvailable: 81490128 kB' 'Buffers: 2696 kB' 'Cached: 9733664 kB' 'SwapCached: 0 kB' 'Active: 6720156 kB'
'Inactive: 3519504 kB' 'Active(anon): 6333048 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506756 kB' 'Mapped: 210336 kB' 'Shmem: 5829748 kB' 'KReclaimable: 214540 kB' 'Slab: 699852 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485312 kB' 'KernelStack: 19744 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7729768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220976 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
[trace condensed: setup/common.sh@31-32 walks the same key list (MemTotal through Unaccepted), each failing the match against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and hitting continue]
00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:21.480
10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 28671456 kB' 'MemUsed: 3963172 kB' 'SwapCached: 0 kB' 'Active: 1549872 kB' 'Inactive: 58760 kB' 'Active(anon): 1402644 kB' 'Inactive(anon): 0 kB' 'Active(file): 147228 kB' 'Inactive(file): 58760 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1427756 kB' 'Mapped: 91144 kB' 'AnonPages: 184088 kB' 'Shmem: 1221768 kB' 'KernelStack: 9480 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72864 kB' 'Slab: 332552 kB' 'SReclaimable: 72864 kB' 'SUnreclaim: 259688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.480 10:30:50 
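The node0 dump above is produced by the get_meminfo helper in setup/common.sh: given a field name and an optional NUMA node it points mem_f at /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix with an extglob substitution, then walks the fields with IFS=': ' read -r var val _ until the requested key matches, which is what the echo/return pairs in the trace correspond to. A minimal self-contained sketch of that pattern; the body below is illustrative, not the actual common.sh source:

#!/usr/bin/env bash
shopt -s extglob                     # needed for the +([0-9]) pattern below

get_meminfo() {                      # sketch only; real common.sh differs in detail
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node meminfo prefixes every line with "Node <N> "; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total          # 1024 on this box, per the trace
get_meminfo HugePages_Surp 0         # surplus hugepages on NUMA node 0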
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace scan condensed over the node0 dump: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and continue] 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.481 10:30:50
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688336 kB' 'MemFree: 49399000 kB' 'MemUsed: 11289336 kB' 'SwapCached: 0 kB' 'Active: 5173148 kB' 'Inactive: 3460744 kB' 'Active(anon): 4933268 kB' 'Inactive(anon): 0 kB' 'Active(file): 239880 kB' 'Inactive(file): 3460744 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8308608 kB' 'Mapped: 119432 kB' 'AnonPages: 325568 kB' 'Shmem: 4607984 kB' 'KernelStack: 10264 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 141676 kB' 'Slab: 367292 kB' 'SReclaimable: 141676 kB' 'SUnreclaim: 225616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.481 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 10:30:50 
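Both per-node dumps report HugePages_Total: 512, and the assertion earlier in the test, (( 1024 == nr_hugepages + surp + resv )), together with the node0=512/node1=512 echoes below, reduces to one invariant: the per-node hugepage counts must sum to the global pool and be split evenly. A standalone recheck of that invariant (variable names are illustrative):

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
sum=0
for f in /sys/devices/system/node/node*/meminfo; do
    # per-node format is "Node 0 HugePages_Total:   512", so the count is field 4
    pages=$(awk '/HugePages_Total:/ {print $4}' "$f")
    echo "$(basename "$(dirname "$f")")=$pages"
    sum=$(( sum + pages ))
done
(( sum == total )) && echo "even alloc OK: $sum == $total" \
                   || echo "mismatch: nodes sum to $sum, global is $total"

Against the numbers in this trace it would print node0=512, node1=512 and "even alloc OK: 1024 == 1024".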
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace scan condensed over the node1 dump: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and continue] 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.483 node0=512 expecting 512 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.483 node1=512 expecting 512 10:30:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.483 00:03:21.483 real 0m3.167s 00:03:21.483 user 0m1.216s 00:03:21.483 sys 0m1.917s 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:21.483 10:30:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.483 ************************************ 00:03:21.483 END TEST even_2G_alloc 00:03:21.483 ************************************ 00:03:21.483 10:30:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:21.483 10:30:50 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:21.483 10:30:50 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:21.483 10:30:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.483 ************************************ 00:03:21.483 START TEST odd_alloc 00:03:21.483 ************************************ 00:03:21.483
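The odd_alloc test that starts here asks for 2098176 kB of hugepage memory (HUGEMEM=2049), which is not a whole multiple of the 2048 kB hugepage size, so the trace below arrives at an odd nr_hugepages=1025 and splits it unevenly across the two nodes (513 + 512). A back-of-envelope check of that arithmetic; the ceiling division is an assumption that matches the logged numbers, since the exact rounding inside setup/hugepages.sh is not shown in this excerpt:

hugemem_mb=2049
hugepage_kb=2048                                        # Hugepagesize in the dumps below
size_kb=$(( hugemem_mb * 1024 ))                        # 2098176 kB, as in get_test_nr_hugepages
pages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # ceiling division, gives 1025
echo "$pages pages = $(( pages * hugepage_kb )) kB"     # 1025 pages = 2099200 kB (the Hugetlb value)

# an odd count over 2 NUMA nodes: one node carries the extra page
nodes=2
echo "node0=$(( pages / nodes + pages % nodes )) node1=$(( pages / nodes ))"   # node0=513 node1=512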
10:30:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.483 10:30:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:24.017 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:24.017 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.017 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:24.017 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.017 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.282 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78096344 kB' 'MemAvailable: 81514000 kB' 'Buffers: 2696 kB' 'Cached: 9733792 kB' 'SwapCached: 0 kB' 'Active: 6719300 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332192 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505156 kB' 'Mapped: 209872 kB' 'Shmem: 5829876 kB' 'KReclaimable: 214540 kB' 'Slab: 700288 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485748 kB' 'KernelStack: 19712 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000484 kB' 'Committed_AS: 7728728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221056 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 
'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:24.283 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace scan condensed over the dump: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS and VmallocTotal each fail [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and continue] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284
10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78099196 kB' 'MemAvailable: 81516852 kB' 'Buffers: 2696 kB' 'Cached: 9733792 kB' 'SwapCached: 0 kB' 'Active: 6719652 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332544 kB' 
'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505536 kB' 'Mapped: 209848 kB' 'Shmem: 5829876 kB' 'KReclaimable: 214540 kB' 'Slab: 700268 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485728 kB' 'KernelStack: 19904 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000484 kB' 'Committed_AS: 7728216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221168 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.284 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.285 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 
10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
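The xtrace block above is one complete call of the get_meminfo helper from test/setup/common.sh. Below is a minimal sketch of that helper, reconstructed only from the traced commands; the real SPDK source may differ in details such as exact line numbers and how the node argument is validated.

# Sketch of get_meminfo as implied by the xtrace above (an assumption,
# not the literal SPDK source). Needs bash 4+ (mapfile) and extglob for
# the +([0-9]) pattern used to strip per-node prefixes.
shopt -s extglob

get_meminfo() {
	local get=$1
	local node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, prefer that NUMA node's meminfo file. Per-node
	# files prefix every line with "Node <N> ", stripped below (@29).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		[[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan field by field; IFS=': ' splits "Key: value kB" into var/val.
	# The RHS of == is unquoted (a pattern match), which is why xtrace
	# renders it escaped, e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p, in the log.
	while IFS=': ' read -r var val _; do
		[[ $var == $get ]] || continue
		echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Called as in the trace, get_meminfo HugePages_Surp prints 0, and get_meminfo MemTotal would print 93322964; values are in kB except for the bare HugePages_* page counters.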
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.286 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78098172 kB' 'MemAvailable: 81515828 kB' 'Buffers: 2696 kB' 'Cached: 9733792 kB' 'SwapCached: 0 kB' 'Active: 6719804 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332696 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505608 kB' 'Mapped: 209848 kB' 'Shmem: 5829876 kB' 'KReclaimable: 214540 kB' 'Slab: 700252 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485712 kB' 'KernelStack: 19920 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000484 kB' 'Committed_AS: 7728396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221136 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
... (setup/common.sh@31-32: every field from MemTotal onward is read, compared against HugePages_Rsvd and skipped with continue until the requested key comes up) ...
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
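This second snapshot differs from the first only in the handful of counters that move between calls (MemFree, AnonPages, KernelStack, PageTables, Committed_AS, VmallocUsed); the hugepage side is stable and internally consistent. A quick worked check of that consistency, using the logged values (Hugetlb aggregates hugepages of all sizes, but only 2048 kB pages are in play here, so it reduces to HugePages_Total times Hugepagesize):

# Consistency check using values copied from the snapshot above.
hugepages_total=1025   # HugePages_Total (a page count, not kB)
hugepagesize_kb=2048   # Hugepagesize
hugetlb_kb=2099200     # Hugetlb: memory consumed by hugepages of all sizes

# 1025 * 2048 kB = 2099200 kB, so the whole pool is 2 MiB pages.
(( hugepages_total * hugepagesize_kb == hugetlb_kb )) && echo consistent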
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:24.288 nr_hugepages=1025
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:24.288 resv_hugepages=0
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:24.288 surplus_hugepages=0
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:24.288 anon_hugepages=0
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.288 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78096940 kB' 'MemAvailable: 81514596 kB' 'Buffers: 2696 kB' 'Cached: 9733808 kB' 'SwapCached: 0 kB' 'Active: 6719584 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332476 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505348 kB' 'Mapped: 209848 kB' 'Shmem: 5829892 kB' 'KReclaimable: 214540 kB' 'Slab: 700252 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485712 kB' 'KernelStack: 20048 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000484 kB' 'Committed_AS: 7728420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221088 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
... (setup/common.sh@31-32: fields are read and compared against HugePages_Total, each skipped with continue; the trace of this scan continues below) ...
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- 
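The loop traced above is the get_meminfo helper in setup/common.sh: it snapshots a meminfo file into an array, strips any leading 'Node N ' prefix, then re-reads the snapshot with IFS=': ' one key/value pair at a time, hitting continue on every key until the requested one matches and its value is echoed. A minimal stand-alone sketch of that scan pattern, reading /proc/meminfo directly (no snapshot array or per-node handling; names here are illustrative, not the script's own):

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo scan seen in the trace: walk
# key/value pairs with IFS=': ' and stop at the requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys hit "continue", as in the trace
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo HugePages_Total   # prints 1025 on the node traced above

The scan is linear in the number of meminfo fields per lookup, which is why the trace spends so many entries on the continue branch.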
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.290 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 28696660 kB' 'MemUsed: 3937968 kB' 'SwapCached: 0 kB' 'Active: 1543292 kB' 'Inactive: 58760 kB' 'Active(anon): 1396064 kB' 'Inactive(anon): 0 kB' 'Active(file): 147228 kB' 'Inactive(file): 58760 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1427880 kB' 'Mapped: 90544 kB' 'AnonPages: 177280 kB' 'Shmem: 1221892 kB' 'KernelStack: 9400 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72864 kB' 'Slab: 332516 kB' 'SReclaimable: 72864 kB' 'SUnreclaim: 259652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: per-field scan of the node0 snapshot, MemTotal through HugePages_Free, each non-matching key taking the continue branch ...]
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.291 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.292 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688336 kB' 'MemFree: 49400724 kB' 'MemUsed: 11287612 kB' 'SwapCached: 0 kB' 'Active: 5175356 kB' 'Inactive: 3460744 kB' 'Active(anon): 4935476 kB' 'Inactive(anon): 0 kB' 'Active(file): 239880 kB' 'Inactive(file): 3460744 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8308632 kB' 'Mapped: 119232 kB' 'AnonPages: 327588 kB' 'Shmem: 4608008 kB' 'KernelStack: 10440 kB' 'PageTables: 5096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 141676 kB' 'Slab: 367704 kB' 'SReclaimable: 141676 kB' 'SUnreclaim: 226028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: per-field scan of the node1 snapshot, MemTotal through HugePages_Free, each non-matching key taking the continue branch ...]
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
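The two lookups above come from /sys/devices/system/node/nodeN/meminfo rather than /proc/meminfo, and the values read back (512 hugepages on node 0, 513 on node 1, surplus 0 on both) account for the global total of 1025 checked at hugepages.sh@110. A rough stand-alone version of that per-node accounting; the awk field positions assume the 'Node N Key: value' layout of the sysfs per-node files:

#!/usr/bin/env bash
# Sum HugePages_Total across NUMA nodes and compare with the global count.
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    # Per-node lines look like: "Node 0 HugePages_Total:   512"
    pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "${node_dir##*/}: HugePages_Total=$pages"
    (( total += pages ))
done
echo "sum over nodes: $total"   # 512 + 513 = 1025 in the run above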
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:24.293
00:03:24.293 real 0m2.920s
00:03:24.293 user 0m1.090s
00:03:24.293 sys 0m1.768s
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:24.293 10:30:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:24.293 ************************************
00:03:24.293 END TEST odd_alloc
00:03:24.293 ************************************
00:03:24.553 10:30:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:24.553 10:30:53 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:24.553 10:30:53 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:24.553 10:30:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:24.553 ************************************
00:03:24.553 START TEST custom_alloc
00:03:24.553 ************************************
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
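get_test_nr_hugepages turns a requested pool size into a page count by dividing by the default hugepage size; with the 2048 kB pages this system reports, the 1048576 argument above yields nr_hugepages=512, and the later 2097152 call yields 1024. A sketch of that arithmetic, assuming the size argument is in kB, which is what those numbers imply:

#!/usr/bin/env bash
# Requested pool size (kB) / default hugepage size (kB) = nr_hugepages.
default_hugepages=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)  # 2048 on this system
for size in 1048576 2097152; do
    echo "size=${size} kB -> nr_hugepages=$(( size / default_hugepages ))"
done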
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:24.553 10:30:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:24.554 10:30:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.554 10:30:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:27.090 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:27.349 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:27.349 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:27.349 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
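custom_alloc finishes by joining one nodes_hp[N]=pages entry per node into the HUGENODE string passed to scripts/setup.sh; the comma join falls out of the local IFS=, set at hugepages.sh@167. A stand-alone sketch of that assembly (the subshell join is this sketch's own device, and the setup.sh call is left commented out since it needs root and the test machine's hardware):

#!/usr/bin/env bash
# Build the HUGENODE spec the way the trace does: per-node entries,
# comma-joined. Counts match the custom_alloc test above.
nodes_hp=([0]=512 [1]=1024)
spec=()
for node in "${!nodes_hp[@]}"; do
    spec+=("nodes_hp[$node]=${nodes_hp[node]}")
done
HUGENODE=$(IFS=,; echo "${spec[*]}")
echo "HUGENODE=$HUGENODE"   # -> nodes_hp[0]=512,nodes_hp[1]=1024
# HUGENODE="$HUGENODE" sudo ./scripts/setup.sh   # as the test then runs it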
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 77055820 kB' 'MemAvailable: 80473476 kB' 'Buffers: 2696 kB' 'Cached: 9733944 kB' 'SwapCached: 0 kB' 'Active: 6719820 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332712 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505344 kB' 'Mapped: 210004 kB' 'Shmem: 5830028 kB' 'KReclaimable: 214540 kB' 'Slab: 700296 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485756 kB' 'KernelStack: 19824 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477220 kB' 'Committed_AS: 7726792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220992 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:27.614 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': '; read -r var val _; every /proc/meminfo key from MemTotal through HardwareCorrupted compared against AnonHugePages and skipped with continue]
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo prologue identical to the AnonHugePages call above, with get=HugePages_Surp]
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 77056264 kB' 'MemAvailable: 80473920 kB' 'Buffers: 2696 kB' 'Cached: 9733948 kB' 'SwapCached: 0 kB' 'Active: 6719016 kB' 'Inactive: 3519504 kB' 'Active(anon): 6331908 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504604 kB' 'Mapped: 209956 kB' 'Shmem: 5830032 kB' 'KReclaimable: 214540 kB' 'Slab: 700288 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485748 kB' 'KernelStack: 19824 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477220 kB' 'Committed_AS: 7726808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220944 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:27.616 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Rsvd compared against HugePages_Surp and skipped with continue]
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: get_meminfo prologue identical to the calls above, with get=HugePages_Rsvd]
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 77056900 kB' 'MemAvailable: 80474556 kB' 'Buffers: 2696 kB' 'Cached: 9733976 kB' 'SwapCached: 0 kB' 'Active: 6718356 kB' 'Inactive: 3519504 kB' 'Active(anon): 6331248 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504396 kB' 'Mapped: 209880 kB' 'Shmem: 5830060 kB' 'KReclaimable: 214540 kB' 'Slab: 700280 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485740 kB' 'KernelStack: 19744 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477220 kB' 'Committed_AS: 7726828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220960 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.618 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.619 10:30:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.619 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.619 [xtrace condensed: common.sh@31-@32 compare each remaining /proc/meminfo key, Committed_AS through HugePages_Free, against HugePages_Rsvd and skip it with 'continue'] 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc
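For orientation, this whole block is bash xtrace from SPDK's hugepage setup tests: get_meminfo (setup/common.sh) scans a meminfo file line by line until it reaches the requested key, then echoes its value. Below is a minimal sketch of what the traced helper appears to do, reconstructed from the trace alone; the real implementation in test/setup/common.sh uses mapfile plus an extglob prefix strip and may differ in detail.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo helper traced above (reconstruction, not the
    # verbatim SPDK code). Reads /proc/meminfo, or a node's own meminfo when
    # a node number is given, and prints the value of the requested key.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node <N> " prefix; strip it so the
        # key comparison below works the same for both file layouts.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the 'continue' runs in the trace
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }

    # get_meminfo HugePages_Rsvd    -> 0   (the 'echo 0' just traced)
    # get_meminfo HugePages_Surp 0  -> 0   (the per-node queries further down)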
-- setup/hugepages.sh@100 -- # resv=0 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:27.620 nr_hugepages=1536 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.620 resv_hugepages=0 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.620 surplus_hugepages=0 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.620 anon_hugepages=0 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 77056996 kB' 'MemAvailable: 80474652 kB' 'Buffers: 2696 kB' 'Cached: 9733984 kB' 'SwapCached: 0 kB' 'Active: 6718116 kB' 'Inactive: 3519504 kB' 'Active(anon): 6331008 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504540 kB' 'Mapped: 209880 kB' 'Shmem: 5830068 kB' 'KReclaimable: 214540 kB' 'Slab: 700280 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485740 kB' 'KernelStack: 19712 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477220 kB' 'Committed_AS: 7726484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220960 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:27.620 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the same scan runs over every /proc/meminfo key, MemTotal through Unaccepted, comparing each against HugePages_Total and skipping it] 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- #
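The accounting just traced at hugepages.sh@107-@110 is the custom_alloc invariant: the total hugepage count must equal requested plus surplus plus reserved, here 1536 == 1536 + 0 + 0. get_nodes then enumerates the NUMA nodes and records how many pages each currently holds (512 on node0, 1024 on node1). A sketch of get_nodes under the same reconstruction caveat; the hugepages-2048kB sysfs path is an assumption taken from the 'Hugepagesize: 2048 kB' line above, and the trace's extglob pattern node+([0-9]) is written with a plain glob here.

    # Sketch of get_nodes as traced at hugepages.sh@27-@33 (reconstruction).
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node[0-9]*; do
            # nodes_sys[N] <- hugepages currently allocated on node N
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail the test if no NUMA nodes were found
    }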
get_meminfo HugePages_Surp 0 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 28687540 kB' 'MemUsed: 3947088 kB' 'SwapCached: 0 kB' 'Active: 1543584 kB' 'Inactive: 58760 kB' 'Active(anon): 1396356 kB' 'Inactive(anon): 0 kB' 'Active(file): 147228 kB' 'Inactive(file): 58760 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1428036 kB' 'Mapped: 90556 kB' 'AnonPages: 177468 kB' 'Shmem: 1222048 kB' 'KernelStack: 9448 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72864 kB' 'Slab: 332392 kB' 'SReclaimable: 72864 kB' 'SUnreclaim: 259528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.622 10:30:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.622 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the scan walks node0's meminfo keys, Active through HugePages_Free, comparing each against HugePages_Surp and skipping it] 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.623 10:30:56
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688336 kB' 'MemFree: 48369856 kB' 'MemUsed: 12318480 kB' 'SwapCached: 0 kB' 'Active: 5174996 kB' 'Inactive: 3460744 kB' 'Active(anon): 4935116 kB' 'Inactive(anon): 0 kB' 'Active(file): 239880 kB' 'Inactive(file): 3460744 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8308668 kB' 'Mapped: 119324 kB' 'AnonPages: 327096 kB' 'Shmem: 4608044 kB' 'KernelStack: 10248 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 141676 kB' 'Slab: 367888 kB' 'SReclaimable: 141676 kB' 'SUnreclaim: 226212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.623 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.624 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.624 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the scan walks node1's meminfo keys, Inactive through HugePages_Free, comparing each against HugePages_Surp and skipping it] 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.625 node0=512 expecting 512 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc --
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:27.625 node1=1024 expecting 1024 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:27.625 00:03:27.625 real 0m3.264s 00:03:27.625 user 0m1.350s 00:03:27.625 sys 0m1.979s 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.625 10:30:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.625 ************************************ 00:03:27.625 END TEST custom_alloc 00:03:27.625 ************************************ 00:03:27.884 10:30:56 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:27.884 10:30:56 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.884 10:30:56 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.884 10:30:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.884 ************************************ 00:03:27.884 START TEST no_shrink_alloc 00:03:27.884 ************************************ 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
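The get_test_nr_hugepages trace above boils down to a few lines of arithmetic: the requested pool size in kB is divided by the default hugepage size to get a page count, which is then credited to each node the caller named. A minimal sketch, reconstructed from the xtrace lines (function and variable names follow the trace; reading default_hugepages from the Hugepagesize field of /proc/meminfo is an assumption about how that global is populated):

#!/usr/bin/env bash
# Sizing step traced at setup/hugepages.sh@49-@73.
# Assumption: default_hugepages holds Hugepagesize in kB (2048 on this rig),
# so a 2097152 kB (2 GiB) request becomes 1024 pages.
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

get_test_nr_hugepages() {
	local size=$1 # requested pool size in kB
	shift
	local node_ids=("$@") # optional explicit node list, e.g. (0)
	(( size >= default_hugepages )) || return 1
	nr_hugepages=$(( size / default_hugepages )) # 2097152 / 2048 = 1024
	declare -ga nodes_test=()
	local node
	for node in "${node_ids[@]}"; do
		# The trace credits the full count to each requested node.
		nodes_test[node]=$nr_hugepages
	done
}

get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}" # 1024 / 1024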
00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.884 10:30:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:30.419 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:30.419 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.419 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.419 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.684 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78082440 kB' 'MemAvailable: 81500096 kB' 'Buffers: 2696 kB' 'Cached: 9734100 kB' 'SwapCached: 0 kB' 'Active: 6719784 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332676 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505196 kB' 'Mapped: 209944 kB' 'Shmem: 5830184 kB' 'KReclaimable: 214540 kB' 'Slab: 699928 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485388 kB' 'KernelStack: 19808 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7727488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220992 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
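Every get_meminfo call in this stretch of the log is the same parser: pick /proc/meminfo (or a per-node meminfo file when a node argument is given), strip any "Node N " prefix, then walk field by field until the requested key matches and echo its value. A minimal sketch reconstructed from the xtrace lines above (the shopt -s extglob line is an assumption; the +([0-9]) strip needs extglob enabled somewhere in the traced script):

#!/usr/bin/env bash
# get_meminfo as traced at setup/common.sh@17-@33: return the value of one
# meminfo field, globally or for a single NUMA node.
get_meminfo() {
	local get=$1
	local node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# With a node argument, read the per-node counters instead; here node is
	# empty, so the guard keeps /proc/meminfo (matching the trace).
	[[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem <"$mem_f"
	shopt -s extglob
	mem=("${mem[@]#Node +([0-9]) }") # strip "Node N " prefix of per-node lines
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue # skip every field until the key matches
		echo "$val" # kB for most fields, a bare count for HugePages_*
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo AnonHugePages # prints 0 here, matching the snapshot above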
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.686 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78083592 kB' 'MemAvailable: 81501248 kB' 'Buffers: 2696 kB' 'Cached: 9734104 kB' 'SwapCached: 0 kB' 'Active: 6719352 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332244 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505260 kB' 'Mapped: 209812 kB' 'Shmem: 5830188 kB' 'KReclaimable: 214540 kB' 'Slab: 699912 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485372 kB' 'KernelStack: 19808 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7727508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220944 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
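With anon and surp now in hand, verify_nr_hugepages keeps collecting counters one get_meminfo call at a time; hugepages.sh@100 queries HugePages_Rsvd immediately below. Sketched in bash against the get_meminfo helper above, with the final consistency check an assumption for illustration only (the actual comparison happens past the end of this portion of the log):

anon=$(get_meminfo AnonHugePages)  # 0 here: no transparent hugepages in play
surp=$(get_meminfo HugePages_Surp) # 0 here: no surplus pages were allocated
resv=$(get_meminfo HugePages_Rsvd) # queried next in the trace
free=$(get_meminfo HugePages_Free)

# Hypothetical check: with nothing shrinking the pool, free pages plus surplus,
# less reservations, should still equal the configured nr_hugepages
# (1024 + 0 - 0 == 1024 with the snapshot values above).
(( free + surp - resv == nr_hugepages )) || echo 'unexpected hugepage count' >&2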
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78083088 kB' 'MemAvailable: 81500744 kB' 'Buffers: 2696 kB' 'Cached: 9734120 kB' 'SwapCached: 0 kB' 'Active: 6719344 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332236 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505228 kB' 'Mapped: 209812 kB' 'Shmem: 5830204 kB' 'KReclaimable: 214540 kB' 'Slab: 699912 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485372 kB' 'KernelStack: 19776 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7727528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220944 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.688 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.689 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.690 nr_hugepages=1024 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.690 resv_hugepages=0 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.690 surplus_hugepages=0 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.690 anon_hugepages=0 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78084552 kB' 'MemAvailable: 81502208 kB' 'Buffers: 2696 kB' 'Cached: 9734148 kB' 'SwapCached: 0 kB' 'Active: 6719008 kB' 'Inactive: 3519504 kB' 'Active(anon): 6331900 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504884 kB' 'Mapped: 209872 kB' 'Shmem: 5830232 kB' 'KReclaimable: 214540 kB' 'Slab: 699912 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 485372 kB' 'KernelStack: 19744 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7727552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220944 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 
10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.690 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.691 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=0 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 27638352 kB' 'MemUsed: 4996276 kB' 'SwapCached: 0 kB' 'Active: 1543648 kB' 'Inactive: 58760 kB' 'Active(anon): 1396420 kB' 'Inactive(anon): 0 kB' 'Active(file): 147228 kB' 'Inactive(file): 58760 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1428140 kB' 'Mapped: 90568 kB' 'AnonPages: 177432 kB' 'Shmem: 1222152 kB' 'KernelStack: 9464 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72864 kB' 'Slab: 331808 kB' 'SReclaimable: 72864 kB' 'SUnreclaim: 258944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.692 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.693 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:30.694 node0=1024 expecting 1024
00:03:30.694 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:30.694 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:30.694 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:30.694 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:30.694 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.694 10:30:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:33.228 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:33.228 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:33.228 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.228 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.492 INFO: Requested 512 hugepages but 1024 already allocated on node0
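The "node0=1024 expecting 1024" line above is the per-node tally behind this check: the script reads each NUMA node's hugepage counter and compares it with the expected total, and since 1024 pages already exist on node0, setup.sh logs the INFO line instead of allocating the requested 512. A minimal sketch of that tally against the standard kernel sysfs layout (the helper name check_node_hugepages is illustrative, not a function from setup/hugepages.sh):

    #!/usr/bin/env bash
    # Count 2 MB hugepages on every NUMA node and report them in the
    # same "nodeN=COUNT expecting EXPECTED" form the trace prints.
    check_node_hugepages() {
        local expected=$1 node count
        for node in /sys/devices/system/node/node[0-9]*; do
            count=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
            echo "${node##*/}=$count expecting $expected"
        done
    }
    check_node_hugepages 1024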
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78067196 kB' 'MemAvailable: 81484852 kB' 'Buffers: 2696 kB' 'Cached: 9734236 kB' 'SwapCached: 0 kB' 'Active: 6719360 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332252 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505180 kB' 'Mapped: 209908 kB' 'Shmem: 5830320 kB' 'KReclaimable: 214540 kB' 'Slab: 700836 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 486296 kB' 'KernelStack: 19696 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7728344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220928 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:33.493 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [field scan: every field from MemTotal through HardwareCorrupted fails the AnonHugePages compare and continues]
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
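The get_meminfo AnonHugePages call above is the pattern this test repeats for every counter: snapshot the meminfo file once, then scan it field by field until the requested name matches. A compact reconstruction of that scan, pieced together from the trace (it follows the traced setup/common.sh statements but is not a verbatim copy):

    #!/usr/bin/env bash
    shopt -s extglob  # for the +([0-9]) pattern that strips "Node N "
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        # Use the per-node meminfo when a node id is given and it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp  # prints 0 on this box, per the snapshot above

The linear scan is also why the xtrace output repeats the same compare-and-continue pair dozens of times per call.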
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.494 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.495 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78068152 kB' 'MemAvailable: 81485808 kB' 'Buffers: 2696 kB' 'Cached: 9734240 kB' 'SwapCached: 0 kB' 'Active: 6719440 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332332 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505252 kB' 'Mapped: 209884 kB' 'Shmem: 5830324 kB' 'KReclaimable: 214540 kB' 'Slab: 700820 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 486280 kB' 'KernelStack: 19744 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7728364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220880 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:33.495 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [field scan: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp compare and continues]
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.496 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.497 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78067144 kB' 'MemAvailable: 81484800 kB' 'Buffers: 2696 kB' 'Cached: 9734256 kB' 'SwapCached: 0 kB' 'Active: 6719660 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332552 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505484 kB' 'Mapped: 209884 kB' 'Shmem: 5830340 kB' 'KReclaimable: 214540 kB' 'Slab: 700876 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 486336 kB' 'KernelStack: 19728 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7728384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220880 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:33.497 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [field scan: fields from MemTotal through HardwareCorrupted each fail the HugePages_Rsvd compare and continue]
00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.498 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.499 nr_hugepages=1024 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.499 resv_hugepages=0 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.499 surplus_hugepages=0 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.499 anon_hugepages=0 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 
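What the trace above is grinding through is common.sh's get_meminfo: it reads /proc/meminfo (or a per-node copy, seen later) into an array, splits each "Key: value kB" line on IFS=': ', and echoes the value as soon as the requested key matches; every non-matching field surfaces in the xtrace as one continue/read pair. The caller in hugepages.sh then checks that the pool is consistent: 1024 == nr_hugepages + surplus + reserved. A minimal self-contained sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source (get_meminfo_sketch is a hypothetical name):

    shopt -s extglob    # the +([0-9]) pattern below is an extended glob
    # Look up one field from /proc/meminfo, or from a NUMA node's copy when a
    # node number is given. Mirrors the IFS=': ' / read -r var val _ pattern
    # visible in the xtrace above.
    get_meminfo_sketch() {   # usage: get_meminfo_sketch <field> [node]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

On this box get_meminfo_sketch HugePages_Rsvd prints 0, which is exactly the echo 0 / resv=0 pair in the trace.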
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322964 kB' 'MemFree: 78066388 kB' 'MemAvailable: 81484044 kB' 'Buffers: 2696 kB' 'Cached: 9734256 kB' 'SwapCached: 0 kB' 'Active: 6719156 kB' 'Inactive: 3519504 kB' 'Active(anon): 6332048 kB' 'Inactive(anon): 0 kB' 'Active(file): 387108 kB' 'Inactive(file): 3519504 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504980 kB' 'Mapped: 209884 kB' 'Shmem: 5830340 kB' 'KReclaimable: 214540 kB' 'Slab: 700876 kB' 'SReclaimable: 214540 kB' 'SUnreclaim: 486336 kB' 'KernelStack: 19728 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001508 kB' 'Committed_AS: 7728408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220880 kB' 'VmallocChunk: 0 kB' 'Percpu: 65280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1809364 kB' 'DirectMap2M: 14647296 kB' 'DirectMap1G: 85983232 kB'
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.499 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.499 [... the scan repeats the same continue / IFS / read triplet for every field from MemFree through Unaccepted while looking for HugePages_Total ...]
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.501 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 27640704 kB' 'MemUsed: 4993924 kB' 'SwapCached: 0 kB' 'Active: 1544260 kB' 'Inactive: 58760 kB' 'Active(anon): 1397032 kB' 'Inactive(anon): 0 kB' 'Active(file): 147228 kB' 'Inactive(file): 58760 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1428260 kB' 'Mapped: 90576 kB' 'AnonPages: 178052 kB' 'Shmem: 1222272 kB' 'KernelStack: 9480 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 72864 kB' 'Slab: 332664 kB' 'SReclaimable: 72864 kB' 'SUnreclaim: 259800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
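The query above works because the kernel publishes a per-node meminfo at /sys/devices/system/node/nodeN/meminfo whose lines carry a "Node N " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion strips before parsing. Around it, get_nodes records how the 1024-page pool is spread over the two NUMA nodes (node0=1024, node1=0 here). A rough equivalent of that tally, assuming the default 2048 kB hugepage size; the sysfs paths are the standard kernel ones, the variable names follow the trace:

    # Read each NUMA node's 2MB-hugepage count and confirm the per-node
    # counts sum to the global pool reported by /proc/meminfo.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    total=0
    for n in "${!nodes_sys[@]}"; do
        (( total += nodes_sys[n] ))
    done
    echo "per-node counts: ${nodes_sys[*]} (sum $total)"   # sums to 1024 on this rig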
00:03:33.501 [... the scan repeats the same continue / IFS / read triplet for the node0 fields MemTotal through HugePages_Free while looking for HugePages_Surp ...]
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:33.502 node0=1024 expecting 1024
00:03:33.502 10:31:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:33.502 
00:03:33.502 real 0m5.770s
00:03:33.502 user 0m2.173s
00:03:33.503 sys 0m3.591s
00:03:33.503 10:31:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:33.503 10:31:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:33.503 ************************************
00:03:33.503 END TEST no_shrink_alloc
00:03:33.503 ************************************
00:03:33.503 10:31:02 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:33.503 10:31:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:33.503 [... hugepages.sh@39-@41 loop over both NUMA nodes and every hugepage size, echoing 0 into each per-node count ...]
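clear_hp then tears the pool back down by writing 0 into every per-node hugepage count. The xtrace shows only the echo 0, never its redirection, so the target below is inferred; nr_hugepages is the standard sysfs knob and writing it needs root. A sketch under those assumptions:

    # Release every hugepage on every node, for every supported page size.
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # inferred target; xtrace hides redirections
        done
    done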
00:03:33.503 10:31:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:33.503 10:31:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:33.503 
00:03:33.503 real 0m22.833s
00:03:33.503 user 0m8.673s
00:03:33.503 sys 0m13.432s
00:03:33.503 10:31:02 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:33.503 10:31:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:33.503 ************************************
00:03:33.503 END TEST hugepages
00:03:33.503 ************************************
00:03:33.762 10:31:02 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/driver.sh
00:03:33.762 10:31:02 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:33.762 10:31:02 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:33.762 10:31:02 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:33.762 ************************************
00:03:33.762 START TEST driver
00:03:33.762 ************************************
00:03:33.762 10:31:02 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/driver.sh
00:03:33.762 * Looking for test storage...
00:03:33.762 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:33.762 10:31:02 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:33.762 10:31:02 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.762 10:31:02 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.951 10:31:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:37.951 10:31:06 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:37.952 10:31:06 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:37.952 10:31:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.210 ************************************ 00:03:38.210 START TEST guess_driver 00:03:38.210 ************************************ 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 220 > 0 )) 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:38.210 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.210 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.210 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.210 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.210 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:38.210 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:38.210 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:38.210 10:31:07 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:38.210 Looking for driver=vfio-pci 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.210 10:31:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:40.743 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ denied == \-\> ]] 00:03:40.743 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.744 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.003 10:31:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.941 10:31:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.941 10:31:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.941 10:31:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.941 10:31:10 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:41.941 10:31:10 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:41.941 10:31:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.941 10:31:10 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.169 00:03:46.169 real 0m8.063s 00:03:46.169 user 0m2.361s 00:03:46.169 sys 0m4.114s 00:03:46.169 10:31:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:46.169 10:31:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.169 
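guess_driver, which just completed above, settled on vfio-pci because all three probes passed: the unsafe no-IOMMU toggle exists (and reads N), /sys/kernel/iommu_groups holds 220 entries, and modprobe resolves vfio_pci to a chain of .ko files. (Note `local iommu_grups` at driver.sh@21 in the trace: a typo in the traced script, since the array actually populated is `iommu_groups`.) Condensed, the decision is roughly:

# Prefer vfio-pci when an IOMMU is usable and the module chain resolves.
unsafe_vfio=N
if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
    unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
fi
iommu_groups=(/sys/kernel/iommu_groups/*)
if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
    # is_driver: a module counts as present if modprobe resolves it to a .ko
    [[ $(modprobe --show-depends vfio_pci) == *.ko* ]] && echo vfio-pci
fi

The `read -r _ _ _ _ marker setup_driver` loop traced above then re-reads `setup.sh config` output: every device line carrying the `->` marker must name the guessed driver, otherwise fail is set and the test aborts.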
************************************ 00:03:46.169 END TEST guess_driver 00:03:46.169 ************************************ 00:03:46.169 00:03:46.169 real 0m12.553s 00:03:46.169 user 0m3.706s 00:03:46.169 sys 0m6.518s 00:03:46.169 10:31:15 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:46.169 10:31:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.169 ************************************ 00:03:46.169 END TEST driver 00:03:46.169 ************************************ 00:03:46.169 10:31:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/devices.sh 00:03:46.169 10:31:15 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:46.169 10:31:15 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:46.169 10:31:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.169 ************************************ 00:03:46.169 START TEST devices 00:03:46.169 ************************************ 00:03:46.169 10:31:15 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/devices.sh 00:03:46.428 * Looking for test storage... 00:03:46.428 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:46.428 10:31:15 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.428 10:31:15 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:46.428 10:31:15 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.428 10:31:15 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e 
/sys/block/nvme1n2/queue/zoned ]] 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:49.720 10:31:18 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.720 No valid GPT data, bailing 00:03:49.720 10:31:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:49.720 10:31:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:49.720 10:31:18 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@203 -- # continue 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@203 -- # continue 00:03:49.720 
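Device screening for the storage tests, traced above, admits a namespace only if it is not zoned, carries no recognizable partition table, and is at least min_disk_size (3 GiB). Here nvme1n2 reports host-managed, which blacklists its controller 0000:5f:00.0 (both nvme1n1 and nvme1n2 hit `continue`), while nvme0n1's "No valid GPT data, bailing" plus its 1000204886016-byte capacity make it the test disk. The real probe shells out to scripts/spdk-gpt.py before falling back to blkid; this sketch keeps only the blkid leg:

# Pick candidate disks: not zoned, no partition table, >= 3 GiB.
min_disk_size=$((3 * 1024 * 1024 * 1024))
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $(<"$block/queue/zoned") != none ]] && continue            # zoned: skip
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue  # in use: skip
    size=$(( $(<"$block/size") * 512 ))                           # sectors to bytes
    ((size >= min_disk_size)) && echo "test disk: $dev"
done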
10:31:18 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:49.720 10:31:18 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:49.720 10:31:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.980 ************************************ 00:03:49.980 START TEST nvme_mount 00:03:49.980 ************************************ 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.980 10:31:18 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:50.918 Creating new GPT entries in memory. 00:03:50.918 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.918 other utilities. 00:03:50.918 10:31:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.918 10:31:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.918 10:31:19 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:50.918 10:31:19 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.918 10:31:19 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.856 Creating new GPT entries in memory. 00:03:51.856 The operation has completed successfully. 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3863823 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:51.856 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.116 10:31:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:54.651 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.651 10:31:23 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 
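Each verify pass, like the one in progress above, re-runs setup.sh config with PCI_ALLOWED pinned to the device under test and scans the per-device output: lines for every other BDF are skipped, and found flips to 1 only when the allowed device's line names the expected active mount or holder. Field positions follow the trace's `read -r pci _ _ status`; how the output is fed to the loop is an assumption:

# Pass iff the allowed device reports the expected active-device string.
spdk_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
dev=0000:5e:00.0 mounts=nvme0n1:nvme0n1p1 found=0
while read -r pci _ _ status; do
    [[ $pci != "$dev" ]] && continue
    [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
done < <(PCI_ALLOWED="$dev" "$spdk_dir/scripts/setup.sh" config)
((found == 1))   # nonzero exit status fails the test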
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.911 10:31:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.171 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.171 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.432 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:55.432 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:55.432 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.432 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.432 10:31:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 
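cleanup_nvme, which ran above between the partition-based pass and the whole-disk pass, unmounts the scratch mountpoint and wipes stale signatures so each pass starts from a blank device; the wipefs records above show the ext4 magic (53 ef) and both GPT headers being cleared. In outline:

# Unmount if mounted, then clear signatures from partition and whole disk.
nvme_mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$nvme_mount" && umount "$nvme_mount"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1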
10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:57.971 10:31:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.230 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.230 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:58.230 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.230 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.230 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.231 10:31:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.765 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.766 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.766 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.766 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.766 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.025 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.025 00:04:01.025 real 0m11.161s 00:04:01.025 user 0m3.219s 00:04:01.025 sys 0m5.601s 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:01.025 10:31:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:01.025 ************************************ 00:04:01.025 END TEST nvme_mount 00:04:01.025 
************************************ 00:04:01.025 10:31:29 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:01.025 10:31:29 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:01.025 10:31:29 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:01.025 10:31:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.025 ************************************ 00:04:01.025 START TEST dm_mount 00:04:01.025 ************************************ 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.025 10:31:29 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:02.404 Creating new GPT entries in memory. 00:04:02.404 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.404 other utilities. 00:04:02.404 10:31:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.404 10:31:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.404 10:31:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.404 10:31:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.404 10:31:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:03.340 Creating new GPT entries in memory. 00:04:03.340 The operation has completed successfully. 
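partition_drive, traced above for this dm_mount case (two 1 GiB partitions; the earlier nvme_mount run cut one), zaps the disk and creates each partition with sgdisk under flock, while a backgrounded sync_dev_uevents.sh waits for the matching udev add events so later steps never race the /dev nodes. The sector ranges below are copied from the trace:

# Zap, then carve two 1 GiB partitions; flock serializes sgdisk against udev.
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351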
00:04:03.340 10:31:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.340 10:31:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.340 10:31:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.340 10:31:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.340 10:31:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:04.276 The operation has completed successfully. 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3868343 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount size= 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.276 10:31:33 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:04:06.809 10:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.809 10:31:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.067 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.067 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:07.067 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:07.067 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.067 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:07.326 10:31:36 
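The dm_mount case above stitches the two partitions into one device-mapper node, then requires both partitions to list that node as a holder before formatting and mounting it. The trace never shows the table piped to dmsetup, so the linear concatenation below is an assumption, not the verbatim test:

# Assumed table: concatenate p1 and p2 into a single linear dm device.
size1=$(blockdev --getsz /dev/nvme0n1p1)   # lengths in 512-byte sectors
size2=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<EOF
0 $size1 linear /dev/nvme0n1p1 0
$size1 $size2 linear /dev/nvme0n1p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # dm-0 in the trace
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]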
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.326 10:31:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:04:09.920 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:09.920 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.178 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.178 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:10.178 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.178 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.178 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.179 10:31:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:10.179 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:10.179 00:04:10.179 real 0m9.192s 00:04:10.179 user 0m2.300s 00:04:10.179 sys 0m3.881s 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:10.179 10:31:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.179 ************************************ 00:04:10.179 END TEST dm_mount 
00:04:10.179 ************************************ 00:04:10.437 10:31:39 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.437 10:31:39 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:10.438 10:31:39 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.438 10:31:39 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.438 10:31:39 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.438 10:31:39 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.438 10:31:39 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.697 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.697 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.697 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.697 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.697 10:31:39 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:10.697 10:31:39 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:04:10.697 10:31:39 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.697 10:31:39 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.697 10:31:39 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.697 10:31:39 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.697 10:31:39 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:10.697 00:04:10.697 real 0m24.329s 00:04:10.697 user 0m6.928s 00:04:10.697 sys 0m11.909s 00:04:10.697 10:31:39 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:10.697 10:31:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.697 ************************************ 00:04:10.697 END TEST devices 00:04:10.697 ************************************ 00:04:10.697 00:04:10.697 real 1m20.540s 00:04:10.697 user 0m26.182s 00:04:10.697 sys 0m44.169s 00:04:10.697 10:31:39 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:10.697 10:31:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.697 ************************************ 00:04:10.697 END TEST setup.sh 00:04:10.697 ************************************ 00:04:10.697 10:31:39 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:04:13.990 Hugepages 00:04:13.990 node hugesize free / total 00:04:13.990 node0 1048576kB 0 / 0 00:04:13.990 node0 2048kB 2048 / 2048 00:04:13.990 node1 1048576kB 0 / 0 00:04:13.990 node1 2048kB 0 / 0 00:04:13.990 00:04:13.990 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.990 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:13.990 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:13.990 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:13.990 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:13.990 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:13.990 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:13.990 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:13.990 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:13.990 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:13.990 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 
nvme1n2 00:04:13.990 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:13.990 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:13.990 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:13.990 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:13.990 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:13.990 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:13.990 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:13.991 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:13.991 10:31:42 -- spdk/autotest.sh@130 -- # uname -s 00:04:13.991 10:31:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:13.991 10:31:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:13.991 10:31:42 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:04:16.524 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:16.783 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.783 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.783 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.783 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.783 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.783 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.783 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.041 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.041 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:17.041 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:17.042 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:17.042 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:17.042 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:17.042 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:17.042 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:17.042 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.978 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:17.978 10:31:46 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:18.927 10:31:47 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:18.927 10:31:47 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:18.927 10:31:47 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.927 10:31:47 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:18.927 10:31:47 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:18.927 10:31:47 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:18.927 10:31:47 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.927 10:31:47 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:18.927 10:31:47 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:18.927 10:31:47 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:18.927 10:31:47 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 00:04:18.927 10:31:47 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.463 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:21.463 Waiting for block devices as requested 00:04:21.463 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:21.722 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.722 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:21.722 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:21.981 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:21.981 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:21.981 0000:00:04.2 (8086 
2021): vfio-pci -> ioatdma 00:04:21.981 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:22.240 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:22.240 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:22.240 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:22.499 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:22.499 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:22.499 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:22.499 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:22.758 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:22.758 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:22.758 10:31:51 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:04:22.758 10:31:51 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:22.758 10:31:51 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:22.758 10:31:51 -- common/autotest_common.sh@1501 -- # grep 0000:5e:00.0/nvme/nvme 00:04:22.758 10:31:51 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.759 10:31:51 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:22.759 10:31:51 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.759 10:31:51 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:22.759 10:31:51 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:22.759 10:31:51 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:22.759 10:31:51 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:22.759 10:31:51 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:22.759 10:31:51 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:22.759 10:31:51 -- common/autotest_common.sh@1544 -- # oacs=' 0xf' 00:04:22.759 10:31:51 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:22.759 10:31:51 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:22.759 10:31:51 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:22.759 10:31:51 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:22.759 10:31:51 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:22.759 10:31:51 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:22.759 10:31:51 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:22.759 10:31:51 -- common/autotest_common.sh@1556 -- # continue 00:04:22.759 10:31:51 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:22.759 10:31:51 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:22.759 10:31:51 -- common/autotest_common.sh@10 -- # set +x 00:04:23.017 10:31:51 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:23.017 10:31:51 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:23.018 10:31:51 -- common/autotest_common.sh@10 -- # set +x 00:04:23.018 10:31:51 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:04:25.553 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:25.812 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.812 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:25.812 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.812 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.812 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 
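
The pre_cleanup pass above decides whether a controller supports namespace management by pulling the OACS field out of nvme id-ctrl and testing bit 3, then checks unvmcap before continuing. A standalone sketch mirroring that check, assuming nvme-cli is installed and using /dev/nvme0 as an illustrative controller path:

#!/usr/bin/env bash
# Read the OACS (Optional Admin Command Support) field and test bit 3,
# the Namespace Management bit, mirroring the pre_cleanup step above.
# /dev/nvme0 is an illustrative controller path.
ctrlr=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
# Mask bit 3 (0x8): non-zero means namespace management is supported.
oacs_ns_manage=$((oacs & 0x8))
if (( oacs_ns_manage != 0 )); then
    echo "$ctrlr supports namespace management (oacs=$oacs)"
else
    echo "$ctrlr lacks namespace management; nothing to revert"
fi
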
0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.071 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.010 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.010 10:31:55 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:27.010 10:31:55 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:27.010 10:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:27.010 10:31:55 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:27.010 10:31:55 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:27.010 10:31:55 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:27.010 10:31:55 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:27.010 10:31:55 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:27.010 10:31:55 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:27.010 10:31:55 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:27.010 10:31:55 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:27.010 10:31:55 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.010 10:31:55 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:27.010 10:31:55 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:27.010 10:31:55 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:27.010 10:31:55 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 00:04:27.010 10:31:55 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:27.010 10:31:55 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:27.010 10:31:55 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:04:27.010 10:31:55 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:27.010 10:31:55 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:04:27.010 10:31:55 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:5e:00.0 00:04:27.010 10:31:55 -- common/autotest_common.sh@1591 -- # [[ -z 0000:5e:00.0 ]] 00:04:27.010 10:31:55 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=3878266 00:04:27.010 10:31:55 -- common/autotest_common.sh@1597 -- # waitforlisten 3878266 00:04:27.010 10:31:55 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.010 10:31:55 -- common/autotest_common.sh@830 -- # '[' -z 3878266 ']' 00:04:27.010 10:31:55 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.010 10:31:55 -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:27.010 10:31:55 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
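
opal_revert_cleanup above narrows the NVMe list to controllers whose PCI device ID is 0x0a54 by reading each device's sysfs "device" attribute. The harness derives its candidate BDFs from scripts/gen_nvme.sh piped through jq; the walk below is a simplified stand-in that scans every NVM Express class function in sysfs directly:

#!/usr/bin/env bash
# Collect PCI addresses of NVMe controllers whose device ID matches a
# target (0x0a54 here, as in get_nvme_bdfs_by_id above). PCI class
# 0x010802 is NVM Express.
target=0x0a54
bdfs=()
for dev in /sys/bus/pci/devices/*; do
    [[ $(cat "$dev/class") == 0x010802* ]] || continue
    if [[ $(cat "$dev/device") == "$target" ]]; then
        bdfs+=("$(basename "$dev")")
    fi
done
printf '%s\n' "${bdfs[@]}"
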
00:04:27.010 10:31:55 -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:27.010 10:31:55 -- common/autotest_common.sh@10 -- # set +x 00:04:27.010 [2024-06-10 10:31:56.035954] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:27.010 [2024-06-10 10:31:56.036004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878266 ] 00:04:27.270 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.270 [2024-06-10 10:31:56.095614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.270 [2024-06-10 10:31:56.166261] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.837 10:31:56 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:27.837 10:31:56 -- common/autotest_common.sh@863 -- # return 0 00:04:27.837 10:31:56 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:04:27.837 10:31:56 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:04:27.837 10:31:56 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:31.124 nvme0n1 00:04:31.124 10:31:59 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:31.124 [2024-06-10 10:31:59.961050] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:31.124 [2024-06-10 10:31:59.961085] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:31.124 request: 00:04:31.124 { 00:04:31.124 "nvme_ctrlr_name": "nvme0", 00:04:31.124 "password": "test", 00:04:31.124 "method": "bdev_nvme_opal_revert", 00:04:31.124 "req_id": 1 00:04:31.124 } 00:04:31.124 Got JSON-RPC error response 00:04:31.124 response: 00:04:31.124 { 00:04:31.124 "code": -32603, 00:04:31.124 "message": "Internal error" 00:04:31.124 } 00:04:31.124 10:31:59 -- common/autotest_common.sh@1603 -- # true 00:04:31.124 10:31:59 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:04:31.124 10:31:59 -- common/autotest_common.sh@1607 -- # killprocess 3878266 00:04:31.124 10:31:59 -- common/autotest_common.sh@949 -- # '[' -z 3878266 ']' 00:04:31.124 10:31:59 -- common/autotest_common.sh@953 -- # kill -0 3878266 00:04:31.124 10:31:59 -- common/autotest_common.sh@954 -- # uname 00:04:31.124 10:31:59 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:31.124 10:31:59 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3878266 00:04:31.124 10:32:00 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:31.124 10:32:00 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:31.124 10:32:00 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3878266' 00:04:31.124 killing process with pid 3878266 00:04:31.124 10:32:00 -- common/autotest_common.sh@968 -- # kill 3878266 00:04:31.124 10:32:00 -- common/autotest_common.sh@973 -- # wait 3878266 00:04:33.093 10:32:01 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:33.093 10:32:01 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:33.093 10:32:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:33.093 10:32:01 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:33.093 10:32:01 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:33.093 10:32:01 -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:04:33.093 10:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:33.093 10:32:01 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:33.093 10:32:01 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:33.093 10:32:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:33.093 10:32:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:33.093 10:32:01 -- common/autotest_common.sh@10 -- # set +x 00:04:33.093 ************************************ 00:04:33.093 START TEST env 00:04:33.093 ************************************ 00:04:33.093 10:32:01 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:33.093 * Looking for test storage... 00:04:33.093 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env 00:04:33.093 10:32:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:04:33.093 10:32:01 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:33.093 10:32:01 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:33.093 10:32:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.093 ************************************ 00:04:33.093 START TEST env_memory 00:04:33.093 ************************************ 00:04:33.093 10:32:01 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:04:33.093 00:04:33.093 00:04:33.093 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.093 http://cunit.sourceforge.net/ 00:04:33.093 00:04:33.093 00:04:33.093 Suite: memory 00:04:33.093 Test: alloc and free memory map ...[2024-06-10 10:32:01.815377] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:33.093 passed 00:04:33.093 Test: mem map translation ...[2024-06-10 10:32:01.833795] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:33.093 [2024-06-10 10:32:01.833812] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:33.093 [2024-06-10 10:32:01.833847] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:33.093 [2024-06-10 10:32:01.833853] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:33.093 passed 00:04:33.093 Test: mem map registration ...[2024-06-10 10:32:01.871593] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:33.093 [2024-06-10 10:32:01.871607] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:33.093 passed 00:04:33.094 Test: mem map adjacent registrations ...passed 00:04:33.094 00:04:33.094 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.094 suites 1 1 n/a 0 
0 00:04:33.094 tests 4 4 4 0 0 00:04:33.094 asserts 152 152 152 0 n/a 00:04:33.094 00:04:33.094 Elapsed time = 0.132 seconds 00:04:33.094 00:04:33.094 real 0m0.139s 00:04:33.094 user 0m0.131s 00:04:33.094 sys 0m0.007s 00:04:33.094 10:32:01 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:33.094 10:32:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 ************************************ 00:04:33.094 END TEST env_memory 00:04:33.094 ************************************ 00:04:33.094 10:32:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:33.094 10:32:01 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:33.094 10:32:01 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:33.094 10:32:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 ************************************ 00:04:33.094 START TEST env_vtophys 00:04:33.094 ************************************ 00:04:33.094 10:32:01 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:33.094 EAL: lib.eal log level changed from notice to debug 00:04:33.094 EAL: Detected lcore 0 as core 0 on socket 0 00:04:33.094 EAL: Detected lcore 1 as core 1 on socket 0 00:04:33.094 EAL: Detected lcore 2 as core 2 on socket 0 00:04:33.094 EAL: Detected lcore 3 as core 3 on socket 0 00:04:33.094 EAL: Detected lcore 4 as core 4 on socket 0 00:04:33.094 EAL: Detected lcore 5 as core 5 on socket 0 00:04:33.094 EAL: Detected lcore 6 as core 6 on socket 0 00:04:33.094 EAL: Detected lcore 7 as core 8 on socket 0 00:04:33.094 EAL: Detected lcore 8 as core 9 on socket 0 00:04:33.094 EAL: Detected lcore 9 as core 10 on socket 0 00:04:33.094 EAL: Detected lcore 10 as core 11 on socket 0 00:04:33.094 EAL: Detected lcore 11 as core 12 on socket 0 00:04:33.094 EAL: Detected lcore 12 as core 13 on socket 0 00:04:33.094 EAL: Detected lcore 13 as core 16 on socket 0 00:04:33.094 EAL: Detected lcore 14 as core 17 on socket 0 00:04:33.094 EAL: Detected lcore 15 as core 18 on socket 0 00:04:33.094 EAL: Detected lcore 16 as core 19 on socket 0 00:04:33.094 EAL: Detected lcore 17 as core 20 on socket 0 00:04:33.094 EAL: Detected lcore 18 as core 21 on socket 0 00:04:33.094 EAL: Detected lcore 19 as core 25 on socket 0 00:04:33.094 EAL: Detected lcore 20 as core 26 on socket 0 00:04:33.094 EAL: Detected lcore 21 as core 27 on socket 0 00:04:33.094 EAL: Detected lcore 22 as core 28 on socket 0 00:04:33.094 EAL: Detected lcore 23 as core 29 on socket 0 00:04:33.094 EAL: Detected lcore 24 as core 0 on socket 1 00:04:33.094 EAL: Detected lcore 25 as core 1 on socket 1 00:04:33.094 EAL: Detected lcore 26 as core 2 on socket 1 00:04:33.094 EAL: Detected lcore 27 as core 3 on socket 1 00:04:33.094 EAL: Detected lcore 28 as core 4 on socket 1 00:04:33.094 EAL: Detected lcore 29 as core 5 on socket 1 00:04:33.094 EAL: Detected lcore 30 as core 6 on socket 1 00:04:33.094 EAL: Detected lcore 31 as core 8 on socket 1 00:04:33.094 EAL: Detected lcore 32 as core 9 on socket 1 00:04:33.094 EAL: Detected lcore 33 as core 10 on socket 1 00:04:33.094 EAL: Detected lcore 34 as core 11 on socket 1 00:04:33.094 EAL: Detected lcore 35 as core 12 on socket 1 00:04:33.094 EAL: Detected lcore 36 as core 13 on socket 1 00:04:33.094 EAL: Detected lcore 37 as core 16 on socket 1 00:04:33.094 EAL: Detected lcore 38 as core 17 on socket 1 00:04:33.094 EAL: Detected 
lcore 39 as core 18 on socket 1 00:04:33.094 EAL: Detected lcore 40 as core 19 on socket 1 00:04:33.094 EAL: Detected lcore 41 as core 20 on socket 1 00:04:33.094 EAL: Detected lcore 42 as core 21 on socket 1 00:04:33.094 EAL: Detected lcore 43 as core 25 on socket 1 00:04:33.094 EAL: Detected lcore 44 as core 26 on socket 1 00:04:33.094 EAL: Detected lcore 45 as core 27 on socket 1 00:04:33.094 EAL: Detected lcore 46 as core 28 on socket 1 00:04:33.094 EAL: Detected lcore 47 as core 29 on socket 1 00:04:33.094 EAL: Detected lcore 48 as core 0 on socket 0 00:04:33.094 EAL: Detected lcore 49 as core 1 on socket 0 00:04:33.094 EAL: Detected lcore 50 as core 2 on socket 0 00:04:33.094 EAL: Detected lcore 51 as core 3 on socket 0 00:04:33.094 EAL: Detected lcore 52 as core 4 on socket 0 00:04:33.094 EAL: Detected lcore 53 as core 5 on socket 0 00:04:33.094 EAL: Detected lcore 54 as core 6 on socket 0 00:04:33.094 EAL: Detected lcore 55 as core 8 on socket 0 00:04:33.094 EAL: Detected lcore 56 as core 9 on socket 0 00:04:33.094 EAL: Detected lcore 57 as core 10 on socket 0 00:04:33.094 EAL: Detected lcore 58 as core 11 on socket 0 00:04:33.094 EAL: Detected lcore 59 as core 12 on socket 0 00:04:33.094 EAL: Detected lcore 60 as core 13 on socket 0 00:04:33.094 EAL: Detected lcore 61 as core 16 on socket 0 00:04:33.094 EAL: Detected lcore 62 as core 17 on socket 0 00:04:33.094 EAL: Detected lcore 63 as core 18 on socket 0 00:04:33.094 EAL: Detected lcore 64 as core 19 on socket 0 00:04:33.094 EAL: Detected lcore 65 as core 20 on socket 0 00:04:33.094 EAL: Detected lcore 66 as core 21 on socket 0 00:04:33.094 EAL: Detected lcore 67 as core 25 on socket 0 00:04:33.094 EAL: Detected lcore 68 as core 26 on socket 0 00:04:33.094 EAL: Detected lcore 69 as core 27 on socket 0 00:04:33.094 EAL: Detected lcore 70 as core 28 on socket 0 00:04:33.094 EAL: Detected lcore 71 as core 29 on socket 0 00:04:33.094 EAL: Detected lcore 72 as core 0 on socket 1 00:04:33.094 EAL: Detected lcore 73 as core 1 on socket 1 00:04:33.095 EAL: Detected lcore 74 as core 2 on socket 1 00:04:33.095 EAL: Detected lcore 75 as core 3 on socket 1 00:04:33.095 EAL: Detected lcore 76 as core 4 on socket 1 00:04:33.095 EAL: Detected lcore 77 as core 5 on socket 1 00:04:33.095 EAL: Detected lcore 78 as core 6 on socket 1 00:04:33.095 EAL: Detected lcore 79 as core 8 on socket 1 00:04:33.095 EAL: Detected lcore 80 as core 9 on socket 1 00:04:33.095 EAL: Detected lcore 81 as core 10 on socket 1 00:04:33.095 EAL: Detected lcore 82 as core 11 on socket 1 00:04:33.095 EAL: Detected lcore 83 as core 12 on socket 1 00:04:33.095 EAL: Detected lcore 84 as core 13 on socket 1 00:04:33.095 EAL: Detected lcore 85 as core 16 on socket 1 00:04:33.095 EAL: Detected lcore 86 as core 17 on socket 1 00:04:33.095 EAL: Detected lcore 87 as core 18 on socket 1 00:04:33.095 EAL: Detected lcore 88 as core 19 on socket 1 00:04:33.095 EAL: Detected lcore 89 as core 20 on socket 1 00:04:33.095 EAL: Detected lcore 90 as core 21 on socket 1 00:04:33.095 EAL: Detected lcore 91 as core 25 on socket 1 00:04:33.095 EAL: Detected lcore 92 as core 26 on socket 1 00:04:33.095 EAL: Detected lcore 93 as core 27 on socket 1 00:04:33.095 EAL: Detected lcore 94 as core 28 on socket 1 00:04:33.095 EAL: Detected lcore 95 as core 29 on socket 1 00:04:33.095 EAL: Maximum logical cores by configuration: 128 00:04:33.095 EAL: Detected CPU lcores: 96 00:04:33.095 EAL: Detected NUMA nodes: 2 00:04:33.095 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:33.095 EAL: Detected 
shared linkage of DPDK 00:04:33.095 EAL: No shared files mode enabled, IPC will be disabled 00:04:33.095 EAL: Bus pci wants IOVA as 'DC' 00:04:33.095 EAL: Buses did not request a specific IOVA mode. 00:04:33.095 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:33.095 EAL: Selected IOVA mode 'VA' 00:04:33.095 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.095 EAL: Probing VFIO support... 00:04:33.095 EAL: IOMMU type 1 (Type 1) is supported 00:04:33.095 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:33.095 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:33.095 EAL: VFIO support initialized 00:04:33.095 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.095 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.095 EAL: Setting up physically contiguous memory... 00:04:33.095 EAL: Setting maximum number of open files to 524288 00:04:33.095 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.095 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:33.095 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.095 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.095 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.095 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.095 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.095 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.095 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.095 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.095 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.095 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.095 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.095 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.095 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:33.095 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.095 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:33.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.095 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:33.095 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.095 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:33.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.095 EAL: Virtual area found at 
0x201400c00000 (size = 0x400000000) 00:04:33.095 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.095 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:33.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.095 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:33.095 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:33.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.096 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:33.096 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:33.096 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.096 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:33.096 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:33.096 EAL: Hugepages will be freed exactly as allocated. 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: TSC frequency is ~2100000 KHz 00:04:33.096 EAL: Main lcore 0 is ready (tid=7fdaf3d4da00;cpuset=[0]) 00:04:33.096 EAL: Trying to obtain current memory policy. 00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.096 EAL: Restoring previous memory policy: 0 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.096 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.096 00:04:33.096 00:04:33.096 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.096 http://cunit.sourceforge.net/ 00:04:33.096 00:04:33.096 00:04:33.096 Suite: components_suite 00:04:33.096 Test: vtophys_malloc_test ...passed 00:04:33.096 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.096 EAL: Restoring previous memory policy: 4 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.096 EAL: Trying to obtain current memory policy. 00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.096 EAL: Restoring previous memory policy: 4 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.096 EAL: Trying to obtain current memory policy. 
00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.096 EAL: Restoring previous memory policy: 4 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.096 EAL: Trying to obtain current memory policy. 00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.096 EAL: Restoring previous memory policy: 4 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.096 EAL: Trying to obtain current memory policy. 00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.096 EAL: Restoring previous memory policy: 4 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was expanded by 34MB 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was shrunk by 34MB 00:04:33.096 EAL: Trying to obtain current memory policy. 00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.096 EAL: Restoring previous memory policy: 4 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was expanded by 66MB 00:04:33.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.096 EAL: request: mp_malloc_sync 00:04:33.096 EAL: No shared files mode enabled, IPC is disabled 00:04:33.096 EAL: Heap on socket 0 was shrunk by 66MB 00:04:33.096 EAL: Trying to obtain current memory policy. 00:04:33.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.354 EAL: Restoring previous memory policy: 4 00:04:33.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.354 EAL: request: mp_malloc_sync 00:04:33.354 EAL: No shared files mode enabled, IPC is disabled 00:04:33.354 EAL: Heap on socket 0 was expanded by 130MB 00:04:33.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.354 EAL: request: mp_malloc_sync 00:04:33.354 EAL: No shared files mode enabled, IPC is disabled 00:04:33.354 EAL: Heap on socket 0 was shrunk by 130MB 00:04:33.354 EAL: Trying to obtain current memory policy. 
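
Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair above is the vtophys test forcing DPDK to allocate and release 2048 kB hugepages through its registered mem event callbacks. Hugepage consumption can be watched from the host side while such a test runs; a rough sketch, with an arbitrary sampling interval:

#!/usr/bin/env bash
# Sample the kernel's hugepage counters twice a second. Each "expanded
# by N MB" event above moves roughly N/2 2MB pages from Free to in-use;
# each "shrunk by N MB" returns them. Stop with Ctrl-C.
while sleep 0.5; do
    awk '/^HugePages_(Total|Free):/ {printf "%s %s  ", $1, $2} END {print ""}' /proc/meminfo
done
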
00:04:33.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.354 EAL: Restoring previous memory policy: 4 00:04:33.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.354 EAL: request: mp_malloc_sync 00:04:33.354 EAL: No shared files mode enabled, IPC is disabled 00:04:33.354 EAL: Heap on socket 0 was expanded by 258MB 00:04:33.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.354 EAL: request: mp_malloc_sync 00:04:33.354 EAL: No shared files mode enabled, IPC is disabled 00:04:33.354 EAL: Heap on socket 0 was shrunk by 258MB 00:04:33.355 EAL: Trying to obtain current memory policy. 00:04:33.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.613 EAL: Restoring previous memory policy: 4 00:04:33.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.613 EAL: request: mp_malloc_sync 00:04:33.613 EAL: No shared files mode enabled, IPC is disabled 00:04:33.613 EAL: Heap on socket 0 was expanded by 514MB 00:04:33.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.613 EAL: request: mp_malloc_sync 00:04:33.613 EAL: No shared files mode enabled, IPC is disabled 00:04:33.613 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.613 EAL: Trying to obtain current memory policy. 00:04:33.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.872 EAL: Restoring previous memory policy: 4 00:04:33.872 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.872 EAL: request: mp_malloc_sync 00:04:33.872 EAL: No shared files mode enabled, IPC is disabled 00:04:33.872 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.129 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.129 EAL: request: mp_malloc_sync 00:04:34.129 EAL: No shared files mode enabled, IPC is disabled 00:04:34.129 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:34.129 passed 00:04:34.129 00:04:34.129 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.129 suites 1 1 n/a 0 0 00:04:34.129 tests 2 2 2 0 0 00:04:34.129 asserts 497 497 497 0 n/a 00:04:34.129 00:04:34.129 Elapsed time = 0.957 seconds 00:04:34.129 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.129 EAL: request: mp_malloc_sync 00:04:34.129 EAL: No shared files mode enabled, IPC is disabled 00:04:34.129 EAL: Heap on socket 0 was shrunk by 2MB 00:04:34.130 EAL: No shared files mode enabled, IPC is disabled 00:04:34.130 EAL: No shared files mode enabled, IPC is disabled 00:04:34.130 EAL: No shared files mode enabled, IPC is disabled 00:04:34.130 00:04:34.130 real 0m1.080s 00:04:34.130 user 0m0.620s 00:04:34.130 sys 0m0.423s 00:04:34.130 10:32:03 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.130 10:32:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:34.130 ************************************ 00:04:34.130 END TEST env_vtophys 00:04:34.130 ************************************ 00:04:34.130 10:32:03 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:34.130 10:32:03 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:34.130 10:32:03 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:34.130 10:32:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.130 ************************************ 00:04:34.130 START TEST env_pci 00:04:34.130 ************************************ 00:04:34.130 10:32:03 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:34.130 00:04:34.130 00:04:34.130 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:34.130 http://cunit.sourceforge.net/ 00:04:34.130 00:04:34.130 00:04:34.130 Suite: pci 00:04:34.130 Test: pci_hook ...[2024-06-10 10:32:03.144945] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3879566 has claimed it 00:04:34.388 EAL: Cannot find device (10000:00:01.0) 00:04:34.388 EAL: Failed to attach device on primary process 00:04:34.388 passed 00:04:34.388 00:04:34.388 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.388 suites 1 1 n/a 0 0 00:04:34.388 tests 1 1 1 0 0 00:04:34.388 asserts 25 25 25 0 n/a 00:04:34.388 00:04:34.388 Elapsed time = 0.027 seconds 00:04:34.388 00:04:34.388 real 0m0.046s 00:04:34.388 user 0m0.017s 00:04:34.388 sys 0m0.029s 00:04:34.388 10:32:03 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.388 10:32:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:34.388 ************************************ 00:04:34.388 END TEST env_pci 00:04:34.388 ************************************ 00:04:34.388 10:32:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:34.388 10:32:03 env -- env/env.sh@15 -- # uname 00:04:34.388 10:32:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:34.388 10:32:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:34.388 10:32:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.388 10:32:03 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:34.388 10:32:03 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:34.388 10:32:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.388 ************************************ 00:04:34.388 START TEST env_dpdk_post_init 00:04:34.388 ************************************ 00:04:34.388 10:32:03 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:34.388 EAL: Detected CPU lcores: 96 00:04:34.388 EAL: Detected NUMA nodes: 2 00:04:34.388 EAL: Detected shared linkage of DPDK 00:04:34.388 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:34.388 EAL: Selected IOVA mode 'VA' 00:04:34.388 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.388 EAL: VFIO support initialized 00:04:34.388 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:34.388 EAL: Using IOMMU type 1 (Type 1) 00:04:34.388 EAL: Ignore mapping IO port bar(1) 00:04:34.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:34.388 EAL: Ignore mapping IO port bar(1) 00:04:34.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:34.388 EAL: Ignore mapping IO port bar(1) 00:04:34.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:34.388 EAL: Ignore mapping IO port bar(1) 00:04:34.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:34.388 EAL: Ignore mapping IO port bar(1) 00:04:34.388 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:34.646 EAL: Ignore mapping IO port bar(1) 00:04:34.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:34.646 EAL: Ignore mapping IO 
port bar(1) 00:04:34.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:34.646 EAL: Ignore mapping IO port bar(1) 00:04:34.646 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:35.215 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:35.215 EAL: Ignore mapping IO port bar(1) 00:04:35.215 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:35.215 EAL: Ignore mapping IO port bar(1) 00:04:35.215 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:35.215 EAL: Ignore mapping IO port bar(1) 00:04:35.215 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:35.215 EAL: Ignore mapping IO port bar(1) 00:04:35.215 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:35.473 EAL: Ignore mapping IO port bar(1) 00:04:35.473 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:35.473 EAL: Ignore mapping IO port bar(1) 00:04:35.473 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:35.473 EAL: Ignore mapping IO port bar(1) 00:04:35.473 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:35.473 EAL: Ignore mapping IO port bar(1) 00:04:35.473 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:38.760 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:38.760 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:38.760 Starting DPDK initialization... 00:04:38.760 Starting SPDK post initialization... 00:04:38.760 SPDK NVMe probe 00:04:38.760 Attaching to 0000:5e:00.0 00:04:38.760 Attached to 0000:5e:00.0 00:04:38.760 Cleaning up... 
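
The "ioatdma -> vfio-pci" transitions reported throughout this log are performed by scripts/setup.sh across every managed device; the minimal by-hand equivalent for a single function uses the kernel's driver_override and drivers_probe sysfs interfaces. A sketch with an illustrative BDF (run as root):

#!/usr/bin/env bash
# Rebind one PCI function to vfio-pci by hand, the same transition the
# log lines above report in bulk. The BDF is illustrative.
bdf=0000:00:04.0
dev=/sys/bus/pci/devices/$bdf
modprobe vfio-pci
# Detach from whatever driver currently owns the function, if any.
[[ -e $dev/driver ]] && echo "$bdf" > "$dev/driver/unbind"
# Pin the next probe to vfio-pci, trigger it, then clear the override
# so later rebinds are not stuck on vfio-pci.
echo vfio-pci > "$dev/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo > "$dev/driver_override"
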
00:04:38.760 00:04:38.760 real 0m4.314s 00:04:38.760 user 0m3.254s 00:04:38.760 sys 0m0.128s 00:04:38.760 10:32:07 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:38.760 10:32:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.760 ************************************ 00:04:38.760 END TEST env_dpdk_post_init 00:04:38.760 ************************************ 00:04:38.760 10:32:07 env -- env/env.sh@26 -- # uname 00:04:38.760 10:32:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.760 10:32:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.760 10:32:07 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:38.760 10:32:07 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:38.760 10:32:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.760 ************************************ 00:04:38.760 START TEST env_mem_callbacks 00:04:38.760 ************************************ 00:04:38.760 10:32:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.760 EAL: Detected CPU lcores: 96 00:04:38.760 EAL: Detected NUMA nodes: 2 00:04:38.760 EAL: Detected shared linkage of DPDK 00:04:38.760 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.760 EAL: Selected IOVA mode 'VA' 00:04:38.760 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.760 EAL: VFIO support initialized 00:04:38.760 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.760 00:04:38.760 00:04:38.760 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.760 http://cunit.sourceforge.net/ 00:04:38.760 00:04:38.760 00:04:38.760 Suite: memory 00:04:38.760 Test: test ... 
00:04:38.760 register 0x200000200000 2097152 00:04:38.760 malloc 3145728 00:04:38.760 register 0x200000400000 4194304 00:04:38.760 buf 0x200000500000 len 3145728 PASSED 00:04:38.760 malloc 64 00:04:38.760 buf 0x2000004fff40 len 64 PASSED 00:04:38.760 malloc 4194304 00:04:38.760 register 0x200000800000 6291456 00:04:38.760 buf 0x200000a00000 len 4194304 PASSED 00:04:38.760 free 0x200000500000 3145728 00:04:38.760 free 0x2000004fff40 64 00:04:38.760 unregister 0x200000400000 4194304 PASSED 00:04:38.760 free 0x200000a00000 4194304 00:04:38.760 unregister 0x200000800000 6291456 PASSED 00:04:38.760 malloc 8388608 00:04:38.760 register 0x200000400000 10485760 00:04:38.760 buf 0x200000600000 len 8388608 PASSED 00:04:38.760 free 0x200000600000 8388608 00:04:38.760 unregister 0x200000400000 10485760 PASSED 00:04:38.760 passed 00:04:38.760 00:04:38.760 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.760 suites 1 1 n/a 0 0 00:04:38.760 tests 1 1 1 0 0 00:04:38.760 asserts 15 15 15 0 n/a 00:04:38.760 00:04:38.760 Elapsed time = 0.005 seconds 00:04:38.760 00:04:38.760 real 0m0.055s 00:04:38.760 user 0m0.016s 00:04:38.760 sys 0m0.039s 00:04:38.760 10:32:07 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:38.761 10:32:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.761 ************************************ 00:04:38.761 END TEST env_mem_callbacks 00:04:38.761 ************************************ 00:04:38.761 00:04:38.761 real 0m6.051s 00:04:38.761 user 0m4.220s 00:04:38.761 sys 0m0.890s 00:04:38.761 10:32:07 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:38.761 10:32:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.761 ************************************ 00:04:38.761 END TEST env 00:04:38.761 ************************************ 00:04:38.761 10:32:07 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.761 10:32:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:38.761 10:32:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:38.761 10:32:07 -- common/autotest_common.sh@10 -- # set +x 00:04:38.761 ************************************ 00:04:38.761 START TEST rpc 00:04:38.761 ************************************ 00:04:38.761 10:32:07 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:39.020 * Looking for test storage... 00:04:39.020 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:39.020 10:32:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3880381 00:04:39.020 10:32:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.020 10:32:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:39.020 10:32:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3880381 00:04:39.020 10:32:07 rpc -- common/autotest_common.sh@830 -- # '[' -z 3880381 ']' 00:04:39.020 10:32:07 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.020 10:32:07 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:39.020 10:32:07 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
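
The rpc suite that follows drives the freshly started spdk_tgt over the default UNIX socket with scripts/rpc.py and inspects the JSON replies with jq. A minimal by-hand version of the rpc_integrity flow below, assuming this job's SPDK workspace path and an already-running target:

#!/usr/bin/env bash
# Hand-driven version of the rpc_integrity steps that follow: create an
# 8MB malloc bdev with 512-byte blocks, then count bdevs with jq, both
# over the default /var/tmp/spdk.sock.
SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
name=$("$rpc" bdev_malloc_create 8 512)   # prints the new bdev name, e.g. Malloc0
echo "created $name"
"$rpc" bdev_get_bdevs | jq length          # 1 after the create above
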
00:04:39.020 10:32:07 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:39.020 10:32:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.020 [2024-06-10 10:32:07.924625] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:39.020 [2024-06-10 10:32:07.924671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880381 ] 00:04:39.020 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.020 [2024-06-10 10:32:07.985614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.279 [2024-06-10 10:32:08.063770] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.279 [2024-06-10 10:32:08.063806] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3880381' to capture a snapshot of events at runtime. 00:04:39.279 [2024-06-10 10:32:08.063813] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.279 [2024-06-10 10:32:08.063819] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.279 [2024-06-10 10:32:08.063824] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3880381 for offline analysis/debug. 00:04:39.279 [2024-06-10 10:32:08.063840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.848 10:32:08 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:39.848 10:32:08 rpc -- common/autotest_common.sh@863 -- # return 0 00:04:39.848 10:32:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:39.848 10:32:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:39.848 10:32:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.848 10:32:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.848 10:32:08 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:39.848 10:32:08 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:39.848 10:32:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.848 ************************************ 00:04:39.848 START TEST rpc_integrity 00:04:39.848 ************************************ 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.848 10:32:08 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.848 { 00:04:39.848 "name": "Malloc0", 00:04:39.848 "aliases": [ 00:04:39.848 "79871ac9-be0a-484e-831e-d59c433da530" 00:04:39.848 ], 00:04:39.848 "product_name": "Malloc disk", 00:04:39.848 "block_size": 512, 00:04:39.848 "num_blocks": 16384, 00:04:39.848 "uuid": "79871ac9-be0a-484e-831e-d59c433da530", 00:04:39.848 "assigned_rate_limits": { 00:04:39.848 "rw_ios_per_sec": 0, 00:04:39.848 "rw_mbytes_per_sec": 0, 00:04:39.848 "r_mbytes_per_sec": 0, 00:04:39.848 "w_mbytes_per_sec": 0 00:04:39.848 }, 00:04:39.848 "claimed": false, 00:04:39.848 "zoned": false, 00:04:39.848 "supported_io_types": { 00:04:39.848 "read": true, 00:04:39.848 "write": true, 00:04:39.848 "unmap": true, 00:04:39.848 "write_zeroes": true, 00:04:39.848 "flush": true, 00:04:39.848 "reset": true, 00:04:39.848 "compare": false, 00:04:39.848 "compare_and_write": false, 00:04:39.848 "abort": true, 00:04:39.848 "nvme_admin": false, 00:04:39.848 "nvme_io": false 00:04:39.848 }, 00:04:39.848 "memory_domains": [ 00:04:39.848 { 00:04:39.848 "dma_device_id": "system", 00:04:39.848 "dma_device_type": 1 00:04:39.848 }, 00:04:39.848 { 00:04:39.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.848 "dma_device_type": 2 00:04:39.848 } 00:04:39.848 ], 00:04:39.848 "driver_specific": {} 00:04:39.848 } 00:04:39.848 ]' 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.848 [2024-06-10 10:32:08.869734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.848 [2024-06-10 10:32:08.869762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.848 [2024-06-10 10:32:08.869774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f536b0 00:04:39.848 [2024-06-10 10:32:08.869780] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.848 [2024-06-10 10:32:08.870763] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.848 [2024-06-10 10:32:08.870783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.848 Passthru0 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:39.848 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:39.848 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.108 { 00:04:40.108 "name": "Malloc0", 00:04:40.108 "aliases": [ 00:04:40.108 "79871ac9-be0a-484e-831e-d59c433da530" 00:04:40.108 ], 00:04:40.108 "product_name": "Malloc disk", 00:04:40.108 "block_size": 512, 00:04:40.108 "num_blocks": 16384, 00:04:40.108 "uuid": "79871ac9-be0a-484e-831e-d59c433da530", 00:04:40.108 "assigned_rate_limits": { 00:04:40.108 "rw_ios_per_sec": 0, 00:04:40.108 "rw_mbytes_per_sec": 0, 00:04:40.108 "r_mbytes_per_sec": 0, 00:04:40.108 "w_mbytes_per_sec": 0 00:04:40.108 }, 00:04:40.108 "claimed": true, 00:04:40.108 "claim_type": "exclusive_write", 00:04:40.108 "zoned": false, 00:04:40.108 "supported_io_types": { 00:04:40.108 "read": true, 00:04:40.108 "write": true, 00:04:40.108 "unmap": true, 00:04:40.108 "write_zeroes": true, 00:04:40.108 "flush": true, 00:04:40.108 "reset": true, 00:04:40.108 "compare": false, 00:04:40.108 "compare_and_write": false, 00:04:40.108 "abort": true, 00:04:40.108 "nvme_admin": false, 00:04:40.108 "nvme_io": false 00:04:40.108 }, 00:04:40.108 "memory_domains": [ 00:04:40.108 { 00:04:40.108 "dma_device_id": "system", 00:04:40.108 "dma_device_type": 1 00:04:40.108 }, 00:04:40.108 { 00:04:40.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.108 "dma_device_type": 2 00:04:40.108 } 00:04:40.108 ], 00:04:40.108 "driver_specific": {} 00:04:40.108 }, 00:04:40.108 { 00:04:40.108 "name": "Passthru0", 00:04:40.108 "aliases": [ 00:04:40.108 "9daf5dcb-a2f2-5b4c-bceb-47ee0f5b1b06" 00:04:40.108 ], 00:04:40.108 "product_name": "passthru", 00:04:40.108 "block_size": 512, 00:04:40.108 "num_blocks": 16384, 00:04:40.108 "uuid": "9daf5dcb-a2f2-5b4c-bceb-47ee0f5b1b06", 00:04:40.108 "assigned_rate_limits": { 00:04:40.108 "rw_ios_per_sec": 0, 00:04:40.108 "rw_mbytes_per_sec": 0, 00:04:40.108 "r_mbytes_per_sec": 0, 00:04:40.108 "w_mbytes_per_sec": 0 00:04:40.108 }, 00:04:40.108 "claimed": false, 00:04:40.108 "zoned": false, 00:04:40.108 "supported_io_types": { 00:04:40.108 "read": true, 00:04:40.108 "write": true, 00:04:40.108 "unmap": true, 00:04:40.108 "write_zeroes": true, 00:04:40.108 "flush": true, 00:04:40.108 "reset": true, 00:04:40.108 "compare": false, 00:04:40.108 "compare_and_write": false, 00:04:40.108 "abort": true, 00:04:40.108 "nvme_admin": false, 00:04:40.108 "nvme_io": false 00:04:40.108 }, 00:04:40.108 "memory_domains": [ 00:04:40.108 { 00:04:40.108 "dma_device_id": "system", 00:04:40.108 "dma_device_type": 1 00:04:40.108 }, 00:04:40.108 { 00:04:40.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.108 "dma_device_type": 2 00:04:40.108 } 00:04:40.108 ], 00:04:40.108 "driver_specific": { 00:04:40.108 "passthru": { 00:04:40.108 "name": "Passthru0", 00:04:40.108 "base_bdev_name": "Malloc0" 00:04:40.108 } 00:04:40.108 } 00:04:40.108 } 00:04:40.108 ]' 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 
10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 10:32:08 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.108 10:32:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.108 10:32:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.108 00:04:40.108 real 0m0.278s 00:04:40.108 user 0m0.175s 00:04:40.108 sys 0m0.033s 00:04:40.108 10:32:09 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.108 10:32:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 ************************************ 00:04:40.108 END TEST rpc_integrity 00:04:40.108 ************************************ 00:04:40.108 10:32:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.108 10:32:09 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:40.108 10:32:09 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:40.108 10:32:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 ************************************ 00:04:40.108 START TEST rpc_plugins 00:04:40.108 ************************************ 00:04:40.108 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:04:40.108 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.108 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.108 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.108 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.108 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.108 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.108 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.108 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.108 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.108 { 00:04:40.108 "name": "Malloc1", 00:04:40.108 "aliases": [ 00:04:40.108 "fc4e9285-3554-4312-b65b-f2891bd20e76" 00:04:40.108 ], 00:04:40.108 "product_name": "Malloc disk", 00:04:40.108 "block_size": 4096, 00:04:40.108 "num_blocks": 256, 00:04:40.108 "uuid": "fc4e9285-3554-4312-b65b-f2891bd20e76", 00:04:40.108 "assigned_rate_limits": { 00:04:40.108 "rw_ios_per_sec": 0, 00:04:40.108 "rw_mbytes_per_sec": 0, 00:04:40.108 "r_mbytes_per_sec": 0, 00:04:40.108 "w_mbytes_per_sec": 0 00:04:40.108 }, 00:04:40.108 "claimed": false, 00:04:40.108 "zoned": false, 00:04:40.108 "supported_io_types": { 00:04:40.108 "read": true, 00:04:40.108 "write": true, 00:04:40.108 "unmap": true, 00:04:40.108 "write_zeroes": true, 00:04:40.108 
"flush": true, 00:04:40.108 "reset": true, 00:04:40.108 "compare": false, 00:04:40.108 "compare_and_write": false, 00:04:40.108 "abort": true, 00:04:40.108 "nvme_admin": false, 00:04:40.108 "nvme_io": false 00:04:40.108 }, 00:04:40.108 "memory_domains": [ 00:04:40.108 { 00:04:40.108 "dma_device_id": "system", 00:04:40.108 "dma_device_type": 1 00:04:40.108 }, 00:04:40.108 { 00:04:40.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.108 "dma_device_type": 2 00:04:40.108 } 00:04:40.108 ], 00:04:40.108 "driver_specific": {} 00:04:40.108 } 00:04:40.108 ]' 00:04:40.108 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.367 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.367 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.367 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.367 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.367 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.367 10:32:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.367 00:04:40.367 real 0m0.133s 00:04:40.367 user 0m0.087s 00:04:40.367 sys 0m0.011s 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.367 10:32:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 ************************************ 00:04:40.367 END TEST rpc_plugins 00:04:40.367 ************************************ 00:04:40.367 10:32:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.367 10:32:09 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:40.367 10:32:09 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:40.367 10:32:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 ************************************ 00:04:40.367 START TEST rpc_trace_cmd_test 00:04:40.367 ************************************ 00:04:40.367 10:32:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:04:40.367 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.367 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.367 10:32:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.367 10:32:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.367 10:32:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.367 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.367 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3880381", 00:04:40.367 "tpoint_group_mask": "0x8", 00:04:40.367 "iscsi_conn": { 00:04:40.367 "mask": "0x2", 00:04:40.367 "tpoint_mask": "0x0" 00:04:40.367 }, 00:04:40.367 "scsi": { 00:04:40.367 "mask": "0x4", 00:04:40.367 "tpoint_mask": "0x0" 00:04:40.367 }, 00:04:40.367 "bdev": { 00:04:40.368 "mask": "0x8", 00:04:40.368 "tpoint_mask": 
"0xffffffffffffffff" 00:04:40.368 }, 00:04:40.368 "nvmf_rdma": { 00:04:40.368 "mask": "0x10", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "nvmf_tcp": { 00:04:40.368 "mask": "0x20", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "ftl": { 00:04:40.368 "mask": "0x40", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "blobfs": { 00:04:40.368 "mask": "0x80", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "dsa": { 00:04:40.368 "mask": "0x200", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "thread": { 00:04:40.368 "mask": "0x400", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "nvme_pcie": { 00:04:40.368 "mask": "0x800", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "iaa": { 00:04:40.368 "mask": "0x1000", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "nvme_tcp": { 00:04:40.368 "mask": "0x2000", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "bdev_nvme": { 00:04:40.368 "mask": "0x4000", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 }, 00:04:40.368 "sock": { 00:04:40.368 "mask": "0x8000", 00:04:40.368 "tpoint_mask": "0x0" 00:04:40.368 } 00:04:40.368 }' 00:04:40.368 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.368 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:40.368 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:40.368 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:40.368 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:40.626 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:40.626 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:40.626 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:40.626 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:40.626 10:32:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:40.626 00:04:40.626 real 0m0.223s 00:04:40.626 user 0m0.193s 00:04:40.626 sys 0m0.022s 00:04:40.626 10:32:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.626 10:32:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.626 ************************************ 00:04:40.626 END TEST rpc_trace_cmd_test 00:04:40.626 ************************************ 00:04:40.626 10:32:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:40.626 10:32:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:40.626 10:32:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:40.626 10:32:09 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:40.626 10:32:09 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:40.626 10:32:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.626 ************************************ 00:04:40.626 START TEST rpc_daemon_integrity 00:04:40.626 ************************************ 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.626 { 00:04:40.626 "name": "Malloc2", 00:04:40.626 "aliases": [ 00:04:40.626 "7381354c-a30a-404c-8d26-d22fcf8cc1da" 00:04:40.626 ], 00:04:40.626 "product_name": "Malloc disk", 00:04:40.626 "block_size": 512, 00:04:40.626 "num_blocks": 16384, 00:04:40.626 "uuid": "7381354c-a30a-404c-8d26-d22fcf8cc1da", 00:04:40.626 "assigned_rate_limits": { 00:04:40.626 "rw_ios_per_sec": 0, 00:04:40.626 "rw_mbytes_per_sec": 0, 00:04:40.626 "r_mbytes_per_sec": 0, 00:04:40.626 "w_mbytes_per_sec": 0 00:04:40.626 }, 00:04:40.626 "claimed": false, 00:04:40.626 "zoned": false, 00:04:40.626 "supported_io_types": { 00:04:40.626 "read": true, 00:04:40.626 "write": true, 00:04:40.626 "unmap": true, 00:04:40.626 "write_zeroes": true, 00:04:40.626 "flush": true, 00:04:40.626 "reset": true, 00:04:40.626 "compare": false, 00:04:40.626 "compare_and_write": false, 00:04:40.626 "abort": true, 00:04:40.626 "nvme_admin": false, 00:04:40.626 "nvme_io": false 00:04:40.626 }, 00:04:40.626 "memory_domains": [ 00:04:40.626 { 00:04:40.626 "dma_device_id": "system", 00:04:40.626 "dma_device_type": 1 00:04:40.626 }, 00:04:40.626 { 00:04:40.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.626 "dma_device_type": 2 00:04:40.626 } 00:04:40.626 ], 00:04:40.626 "driver_specific": {} 00:04:40.626 } 00:04:40.626 ]' 00:04:40.626 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.885 [2024-06-10 10:32:09.699987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.885 [2024-06-10 10:32:09.700014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.885 [2024-06-10 10:32:09.700025] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff1110 00:04:40.885 [2024-06-10 10:32:09.700031] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.885 [2024-06-10 10:32:09.700950] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.885 [2024-06-10 10:32:09.700978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.885 Passthru0 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.885 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.885 { 00:04:40.885 "name": "Malloc2", 00:04:40.885 "aliases": [ 00:04:40.885 "7381354c-a30a-404c-8d26-d22fcf8cc1da" 00:04:40.885 ], 00:04:40.885 "product_name": "Malloc disk", 00:04:40.885 "block_size": 512, 00:04:40.885 "num_blocks": 16384, 00:04:40.885 "uuid": "7381354c-a30a-404c-8d26-d22fcf8cc1da", 00:04:40.885 "assigned_rate_limits": { 00:04:40.885 "rw_ios_per_sec": 0, 00:04:40.885 "rw_mbytes_per_sec": 0, 00:04:40.885 "r_mbytes_per_sec": 0, 00:04:40.885 "w_mbytes_per_sec": 0 00:04:40.885 }, 00:04:40.885 "claimed": true, 00:04:40.885 "claim_type": "exclusive_write", 00:04:40.885 "zoned": false, 00:04:40.885 "supported_io_types": { 00:04:40.885 "read": true, 00:04:40.885 "write": true, 00:04:40.885 "unmap": true, 00:04:40.885 "write_zeroes": true, 00:04:40.885 "flush": true, 00:04:40.885 "reset": true, 00:04:40.885 "compare": false, 00:04:40.885 "compare_and_write": false, 00:04:40.885 "abort": true, 00:04:40.885 "nvme_admin": false, 00:04:40.885 "nvme_io": false 00:04:40.885 }, 00:04:40.885 "memory_domains": [ 00:04:40.885 { 00:04:40.885 "dma_device_id": "system", 00:04:40.885 "dma_device_type": 1 00:04:40.885 }, 00:04:40.885 { 00:04:40.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.885 "dma_device_type": 2 00:04:40.885 } 00:04:40.886 ], 00:04:40.886 "driver_specific": {} 00:04:40.886 }, 00:04:40.886 { 00:04:40.886 "name": "Passthru0", 00:04:40.886 "aliases": [ 00:04:40.886 "a7b1abfe-1f8c-5bde-aa09-682948888814" 00:04:40.886 ], 00:04:40.886 "product_name": "passthru", 00:04:40.886 "block_size": 512, 00:04:40.886 "num_blocks": 16384, 00:04:40.886 "uuid": "a7b1abfe-1f8c-5bde-aa09-682948888814", 00:04:40.886 "assigned_rate_limits": { 00:04:40.886 "rw_ios_per_sec": 0, 00:04:40.886 "rw_mbytes_per_sec": 0, 00:04:40.886 "r_mbytes_per_sec": 0, 00:04:40.886 "w_mbytes_per_sec": 0 00:04:40.886 }, 00:04:40.886 "claimed": false, 00:04:40.886 "zoned": false, 00:04:40.886 "supported_io_types": { 00:04:40.886 "read": true, 00:04:40.886 "write": true, 00:04:40.886 "unmap": true, 00:04:40.886 "write_zeroes": true, 00:04:40.886 "flush": true, 00:04:40.886 "reset": true, 00:04:40.886 "compare": false, 00:04:40.886 "compare_and_write": false, 00:04:40.886 "abort": true, 00:04:40.886 "nvme_admin": false, 00:04:40.886 "nvme_io": false 00:04:40.886 }, 00:04:40.886 "memory_domains": [ 00:04:40.886 { 00:04:40.886 "dma_device_id": "system", 00:04:40.886 "dma_device_type": 1 00:04:40.886 }, 00:04:40.886 { 00:04:40.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.886 "dma_device_type": 2 00:04:40.886 } 00:04:40.886 ], 00:04:40.886 "driver_specific": { 00:04:40.886 "passthru": { 00:04:40.886 "name": "Passthru0", 00:04:40.886 "base_bdev_name": "Malloc2" 00:04:40.886 } 00:04:40.886 } 00:04:40.886 } 00:04:40.886 ]' 00:04:40.886 10:32:09 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.886 00:04:40.886 real 0m0.273s 00:04:40.886 user 0m0.163s 00:04:40.886 sys 0m0.044s 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.886 10:32:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.886 ************************************ 00:04:40.886 END TEST rpc_daemon_integrity 00:04:40.886 ************************************ 00:04:40.886 10:32:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.886 10:32:09 rpc -- rpc/rpc.sh@84 -- # killprocess 3880381 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@949 -- # '[' -z 3880381 ']' 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@953 -- # kill -0 3880381 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@954 -- # uname 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3880381 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3880381' 00:04:40.886 killing process with pid 3880381 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@968 -- # kill 3880381 00:04:40.886 10:32:09 rpc -- common/autotest_common.sh@973 -- # wait 3880381 00:04:41.455 00:04:41.455 real 0m2.430s 00:04:41.455 user 0m3.101s 00:04:41.455 sys 0m0.675s 00:04:41.455 10:32:10 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.455 10:32:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.455 ************************************ 00:04:41.455 END TEST rpc 00:04:41.455 ************************************ 00:04:41.455 10:32:10 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.455 10:32:10 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:41.455 10:32:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:41.455 10:32:10 -- common/autotest_common.sh@10 -- # set +x 00:04:41.455 ************************************ 00:04:41.455 START TEST skip_rpc 00:04:41.455 ************************************ 00:04:41.455 10:32:10 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:41.455 * Looking for test storage... 00:04:41.455 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:41.455 10:32:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:41.455 10:32:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:41.455 10:32:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.455 10:32:10 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:41.455 10:32:10 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:41.455 10:32:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.455 ************************************ 00:04:41.455 START TEST skip_rpc 00:04:41.455 ************************************ 00:04:41.455 10:32:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:04:41.455 10:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3881007 00:04:41.455 10:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.455 10:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.455 10:32:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.455 [2024-06-10 10:32:10.454266] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
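The skip_rpc case starting here is the inverse check: the target is launched with --no-rpc-server, the script sleeps instead of polling a socket, and the NOT helper then requires rpc_cmd spdk_get_version to fail. The same shape by hand:

# With --no-rpc-server there is nothing listening on /var/tmp/spdk.sock, so any
# RPC must fail; the test's NOT wrapper inverts that status into a pass.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
pid=$!
sleep 5                                   # mirrors the test's 'sleep 5' above
./scripts/rpc.py spdk_get_version && echo FAIL || echo 'refused, as expected'
kill "$pid"; wait "$pid"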
00:04:41.455 [2024-06-10 10:32:10.454301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881007 ] 00:04:41.455 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.713 [2024-06-10 10:32:10.513342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.714 [2024-06-10 10:32:10.585397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3881007 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 3881007 ']' 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 3881007 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3881007 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3881007' 00:04:46.985 killing process with pid 3881007 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 3881007 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 3881007 00:04:46.985 00:04:46.985 real 0m5.366s 00:04:46.985 user 0m5.126s 00:04:46.985 sys 0m0.264s 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:46.985 10:32:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.985 ************************************ 00:04:46.985 END TEST skip_rpc 
00:04:46.985 ************************************ 00:04:46.985 10:32:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.985 10:32:15 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:46.985 10:32:15 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:46.985 10:32:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.985 ************************************ 00:04:46.985 START TEST skip_rpc_with_json 00:04:46.985 ************************************ 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3881942 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3881942 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 3881942 ']' 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.985 10:32:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:46.986 10:32:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.986 [2024-06-10 10:32:15.892458] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
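skip_rpc_with_json, starting here, does a save/load round trip: the fresh target has no tcp transport (nvmf_get_transports returns the JSON-RPC 'No such device' error shown below), one is created, save_config serializes the whole subsystem tree to config.json, and a second target replays that file with --json before the script greps its log for 'TCP Transport Init'. Condensed to the manual steps (paths relative to the spdk checkout; the log uses absolute workspace paths):

./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > test/rpc/config.json
# Relaunch from the saved state, then assert the transport was reconstructed:
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
sleep 5                                   # assumption: give init time to finish
grep -q 'TCP Transport Init' test/rpc/log.txt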
00:04:46.986 [2024-06-10 10:32:15.892497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881942 ] 00:04:46.986 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.986 [2024-06-10 10:32:15.953268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.245 [2024-06-10 10:32:16.022535] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.813 [2024-06-10 10:32:16.681101] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.813 request: 00:04:47.813 { 00:04:47.813 "trtype": "tcp", 00:04:47.813 "method": "nvmf_get_transports", 00:04:47.813 "req_id": 1 00:04:47.813 } 00:04:47.813 Got JSON-RPC error response 00:04:47.813 response: 00:04:47.813 { 00:04:47.813 "code": -19, 00:04:47.813 "message": "No such device" 00:04:47.813 } 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.813 [2024-06-10 10:32:16.689199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.813 10:32:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:47.813 { 00:04:47.813 "subsystems": [ 00:04:47.813 { 00:04:47.813 "subsystem": "keyring", 00:04:47.813 "config": [] 00:04:47.813 }, 00:04:47.813 { 00:04:47.813 "subsystem": "iobuf", 00:04:47.813 "config": [ 00:04:47.813 { 00:04:47.813 "method": "iobuf_set_options", 00:04:47.813 "params": { 00:04:47.813 "small_pool_count": 8192, 00:04:47.813 "large_pool_count": 1024, 00:04:47.813 "small_bufsize": 8192, 00:04:47.813 "large_bufsize": 135168 00:04:47.813 } 00:04:47.813 } 00:04:47.813 ] 00:04:47.813 }, 00:04:47.813 { 00:04:47.813 "subsystem": "sock", 00:04:47.813 "config": [ 00:04:47.813 { 00:04:47.813 "method": "sock_set_default_impl", 00:04:47.813 "params": { 00:04:47.813 "impl_name": "posix" 00:04:47.813 } 00:04:47.813 }, 00:04:47.813 { 00:04:47.813 "method": "sock_impl_set_options", 00:04:47.813 "params": { 00:04:47.813 "impl_name": "ssl", 00:04:47.813 "recv_buf_size": 
4096, 00:04:47.813 "send_buf_size": 4096, 00:04:47.813 "enable_recv_pipe": true, 00:04:47.813 "enable_quickack": false, 00:04:47.813 "enable_placement_id": 0, 00:04:47.813 "enable_zerocopy_send_server": true, 00:04:47.813 "enable_zerocopy_send_client": false, 00:04:47.813 "zerocopy_threshold": 0, 00:04:47.813 "tls_version": 0, 00:04:47.813 "enable_ktls": false 00:04:47.813 } 00:04:47.813 }, 00:04:47.813 { 00:04:47.813 "method": "sock_impl_set_options", 00:04:47.813 "params": { 00:04:47.813 "impl_name": "posix", 00:04:47.813 "recv_buf_size": 2097152, 00:04:47.813 "send_buf_size": 2097152, 00:04:47.813 "enable_recv_pipe": true, 00:04:47.813 "enable_quickack": false, 00:04:47.813 "enable_placement_id": 0, 00:04:47.813 "enable_zerocopy_send_server": true, 00:04:47.813 "enable_zerocopy_send_client": false, 00:04:47.813 "zerocopy_threshold": 0, 00:04:47.813 "tls_version": 0, 00:04:47.813 "enable_ktls": false 00:04:47.813 } 00:04:47.813 } 00:04:47.813 ] 00:04:47.813 }, 00:04:47.813 { 00:04:47.813 "subsystem": "vmd", 00:04:47.813 "config": [] 00:04:47.813 }, 00:04:47.813 { 00:04:47.813 "subsystem": "accel", 00:04:47.813 "config": [ 00:04:47.813 { 00:04:47.813 "method": "accel_set_options", 00:04:47.813 "params": { 00:04:47.813 "small_cache_size": 128, 00:04:47.813 "large_cache_size": 16, 00:04:47.813 "task_count": 2048, 00:04:47.813 "sequence_count": 2048, 00:04:47.813 "buf_count": 2048 00:04:47.813 } 00:04:47.813 } 00:04:47.813 ] 00:04:47.813 }, 00:04:47.813 { 00:04:47.813 "subsystem": "bdev", 00:04:47.813 "config": [ 00:04:47.813 { 00:04:47.813 "method": "bdev_set_options", 00:04:47.813 "params": { 00:04:47.813 "bdev_io_pool_size": 65535, 00:04:47.813 "bdev_io_cache_size": 256, 00:04:47.813 "bdev_auto_examine": true, 00:04:47.813 "iobuf_small_cache_size": 128, 00:04:47.813 "iobuf_large_cache_size": 16 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "bdev_raid_set_options", 00:04:47.814 "params": { 00:04:47.814 "process_window_size_kb": 1024 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "bdev_iscsi_set_options", 00:04:47.814 "params": { 00:04:47.814 "timeout_sec": 30 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "bdev_nvme_set_options", 00:04:47.814 "params": { 00:04:47.814 "action_on_timeout": "none", 00:04:47.814 "timeout_us": 0, 00:04:47.814 "timeout_admin_us": 0, 00:04:47.814 "keep_alive_timeout_ms": 10000, 00:04:47.814 "arbitration_burst": 0, 00:04:47.814 "low_priority_weight": 0, 00:04:47.814 "medium_priority_weight": 0, 00:04:47.814 "high_priority_weight": 0, 00:04:47.814 "nvme_adminq_poll_period_us": 10000, 00:04:47.814 "nvme_ioq_poll_period_us": 0, 00:04:47.814 "io_queue_requests": 0, 00:04:47.814 "delay_cmd_submit": true, 00:04:47.814 "transport_retry_count": 4, 00:04:47.814 "bdev_retry_count": 3, 00:04:47.814 "transport_ack_timeout": 0, 00:04:47.814 "ctrlr_loss_timeout_sec": 0, 00:04:47.814 "reconnect_delay_sec": 0, 00:04:47.814 "fast_io_fail_timeout_sec": 0, 00:04:47.814 "disable_auto_failback": false, 00:04:47.814 "generate_uuids": false, 00:04:47.814 "transport_tos": 0, 00:04:47.814 "nvme_error_stat": false, 00:04:47.814 "rdma_srq_size": 0, 00:04:47.814 "io_path_stat": false, 00:04:47.814 "allow_accel_sequence": false, 00:04:47.814 "rdma_max_cq_size": 0, 00:04:47.814 "rdma_cm_event_timeout_ms": 0, 00:04:47.814 "dhchap_digests": [ 00:04:47.814 "sha256", 00:04:47.814 "sha384", 00:04:47.814 "sha512" 00:04:47.814 ], 00:04:47.814 "dhchap_dhgroups": [ 00:04:47.814 "null", 00:04:47.814 "ffdhe2048", 00:04:47.814 "ffdhe3072", 
00:04:47.814 "ffdhe4096", 00:04:47.814 "ffdhe6144", 00:04:47.814 "ffdhe8192" 00:04:47.814 ] 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "bdev_nvme_set_hotplug", 00:04:47.814 "params": { 00:04:47.814 "period_us": 100000, 00:04:47.814 "enable": false 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "bdev_wait_for_examine" 00:04:47.814 } 00:04:47.814 ] 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "scsi", 00:04:47.814 "config": null 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "scheduler", 00:04:47.814 "config": [ 00:04:47.814 { 00:04:47.814 "method": "framework_set_scheduler", 00:04:47.814 "params": { 00:04:47.814 "name": "static" 00:04:47.814 } 00:04:47.814 } 00:04:47.814 ] 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "vhost_scsi", 00:04:47.814 "config": [] 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "vhost_blk", 00:04:47.814 "config": [] 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "ublk", 00:04:47.814 "config": [] 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "nbd", 00:04:47.814 "config": [] 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "nvmf", 00:04:47.814 "config": [ 00:04:47.814 { 00:04:47.814 "method": "nvmf_set_config", 00:04:47.814 "params": { 00:04:47.814 "discovery_filter": "match_any", 00:04:47.814 "admin_cmd_passthru": { 00:04:47.814 "identify_ctrlr": false 00:04:47.814 } 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "nvmf_set_max_subsystems", 00:04:47.814 "params": { 00:04:47.814 "max_subsystems": 1024 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "nvmf_set_crdt", 00:04:47.814 "params": { 00:04:47.814 "crdt1": 0, 00:04:47.814 "crdt2": 0, 00:04:47.814 "crdt3": 0 00:04:47.814 } 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "method": "nvmf_create_transport", 00:04:47.814 "params": { 00:04:47.814 "trtype": "TCP", 00:04:47.814 "max_queue_depth": 128, 00:04:47.814 "max_io_qpairs_per_ctrlr": 127, 00:04:47.814 "in_capsule_data_size": 4096, 00:04:47.814 "max_io_size": 131072, 00:04:47.814 "io_unit_size": 131072, 00:04:47.814 "max_aq_depth": 128, 00:04:47.814 "num_shared_buffers": 511, 00:04:47.814 "buf_cache_size": 4294967295, 00:04:47.814 "dif_insert_or_strip": false, 00:04:47.814 "zcopy": false, 00:04:47.814 "c2h_success": true, 00:04:47.814 "sock_priority": 0, 00:04:47.814 "abort_timeout_sec": 1, 00:04:47.814 "ack_timeout": 0, 00:04:47.814 "data_wr_pool_size": 0 00:04:47.814 } 00:04:47.814 } 00:04:47.814 ] 00:04:47.814 }, 00:04:47.814 { 00:04:47.814 "subsystem": "iscsi", 00:04:47.814 "config": [ 00:04:47.814 { 00:04:47.814 "method": "iscsi_set_options", 00:04:47.814 "params": { 00:04:47.814 "node_base": "iqn.2016-06.io.spdk", 00:04:47.814 "max_sessions": 128, 00:04:47.814 "max_connections_per_session": 2, 00:04:47.814 "max_queue_depth": 64, 00:04:47.814 "default_time2wait": 2, 00:04:47.814 "default_time2retain": 20, 00:04:47.814 "first_burst_length": 8192, 00:04:47.814 "immediate_data": true, 00:04:47.814 "allow_duplicated_isid": false, 00:04:47.814 "error_recovery_level": 0, 00:04:47.814 "nop_timeout": 60, 00:04:47.814 "nop_in_interval": 30, 00:04:47.814 "disable_chap": false, 00:04:47.814 "require_chap": false, 00:04:47.814 "mutual_chap": false, 00:04:47.814 "chap_group": 0, 00:04:47.814 "max_large_datain_per_connection": 64, 00:04:47.814 "max_r2t_per_connection": 4, 00:04:47.814 "pdu_pool_size": 36864, 00:04:47.814 "immediate_data_pool_size": 16384, 00:04:47.814 "data_out_pool_size": 2048 00:04:47.814 } 
00:04:47.814 } 00:04:47.814 ] 00:04:47.814 } 00:04:47.814 ] 00:04:47.814 } 00:04:47.814 10:32:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.814 10:32:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3881942 00:04:47.814 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3881942 ']' 00:04:47.814 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3881942 00:04:47.814 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:47.814 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:47.814 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3881942 00:04:48.073 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:48.073 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:48.073 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3881942' 00:04:48.073 killing process with pid 3881942 00:04:48.074 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3881942 00:04:48.074 10:32:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3881942 00:04:48.333 10:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3882181 00:04:48.333 10:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.333 10:32:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3882181 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3882181 ']' 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3882181 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3882181 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3882181' 00:04:53.605 killing process with pid 3882181 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3882181 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3882181 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:53.605 00:04:53.605 real 0m6.686s 00:04:53.605 user 0m6.503s 00:04:53.605 sys 0m0.558s 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.605 ************************************ 00:04:53.605 END TEST skip_rpc_with_json 00:04:53.605 ************************************ 00:04:53.605 10:32:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.605 10:32:22 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.605 10:32:22 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.605 10:32:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.605 ************************************ 00:04:53.605 START TEST skip_rpc_with_delay 00:04:53.605 ************************************ 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:53.605 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.606 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.865 [2024-06-10 10:32:22.635985] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
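The ERROR above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc parks the target with only the startup RPC set live until framework_start_init arrives, so pairing it with --no-rpc-server is rejected outright. For contrast, the supported use of the flag looks roughly like this:

# Target idles in the pre-init state; configuration that must land before
# subsystem init (e.g. sock_impl_set_options, visible in the saved config
# above) would be applied here, then init is released.
./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
./scripts/rpc.py framework_start_init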
00:04:53.865 [2024-06-10 10:32:22.636044] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.865 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:53.865 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:53.865 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:53.865 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:53.865 00:04:53.865 real 0m0.059s 00:04:53.865 user 0m0.030s 00:04:53.865 sys 0m0.028s 00:04:53.865 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.865 10:32:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.865 ************************************ 00:04:53.865 END TEST skip_rpc_with_delay 00:04:53.865 ************************************ 00:04:53.865 10:32:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.865 10:32:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.865 10:32:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.865 10:32:22 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.865 10:32:22 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.865 10:32:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.865 ************************************ 00:04:53.865 START TEST exit_on_failed_rpc_init 00:04:53.865 ************************************ 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3883139 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3883139 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 3883139 ']' 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:53.865 10:32:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.865 [2024-06-10 10:32:22.754409] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
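exit_on_failed_rpc_init, starting above, stages a deliberate collision: the first target claims the default /var/tmp/spdk.sock, then a second target on core mask 0x2 tries the same path, fails to listen (the 'in use. Specify another.' error below), and exits non-zero, which the harness routes through its NOT/es accounting (es=234 in this run). Reduced to two launches:

# Second launch must fail: same default RPC socket path.
./build/bin/spdk_tgt -m 0x1 &
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
./build/bin/spdk_tgt -m 0x2                 # rpc listen fails, app stops non-zero
echo $?                                     # 234 in this run, per the es=234 below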
00:04:53.865 [2024-06-10 10:32:22.754446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883139 ] 00:04:53.865 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.865 [2024-06-10 10:32:22.813350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.865 [2024-06-10 10:32:22.891381] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.801 [2024-06-10 10:32:23.565209] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:54.801 [2024-06-10 10:32:23.565256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883365 ] 00:04:54.801 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.801 [2024-06-10 10:32:23.623415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.801 [2024-06-10 10:32:23.693758] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.801 [2024-06-10 10:32:23.693820] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
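What the trace is driving at here: the first spdk_tgt (pid 3883139) already owns /var/tmp/spdk.sock, so the second instance must fail RPC initialization and exit nonzero, which exit_on_failed_rpc_init then asserts before killing the survivor. A rough sketch of that collision setup, with the harness's waitforlisten replaced by a crude sleep:

SPDK_TGT=./build/bin/spdk_tgt   # assumed build location

# First target claims the default RPC socket on core 0.
"$SPDK_TGT" -m 0x1 &
first_pid=$!
sleep 1   # crude stand-in for waitforlisten

# Second target on core 1 reuses the same socket and must bail out with
# "RPC Unix domain socket path /var/tmp/spdk.sock in use."
if "$SPDK_TGT" -m 0x2; then
    echo "FAIL: second target should not have started" >&2
fi

kill "$first_pid"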
00:04:54.801 [2024-06-10 10:32:23.693829] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.801 [2024-06-10 10:32:23.693834] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3883139 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 3883139 ']' 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 3883139 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3883139 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3883139' 00:04:54.801 killing process with pid 3883139 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 3883139 00:04:54.801 10:32:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 3883139 00:04:55.369 00:04:55.369 real 0m1.399s 00:04:55.369 user 0m1.582s 00:04:55.369 sys 0m0.388s 00:04:55.369 10:32:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:55.369 10:32:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.369 ************************************ 00:04:55.369 END TEST exit_on_failed_rpc_init 00:04:55.369 ************************************ 00:04:55.369 10:32:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:55.369 00:04:55.369 real 0m13.847s 00:04:55.369 user 0m13.361s 00:04:55.369 sys 0m1.479s 00:04:55.369 10:32:24 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:55.369 10:32:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.369 ************************************ 00:04:55.369 END TEST skip_rpc 00:04:55.369 ************************************ 00:04:55.369 10:32:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.369 10:32:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:55.369 10:32:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:55.369 10:32:24 -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.369 ************************************ 00:04:55.369 START TEST rpc_client 00:04:55.369 ************************************ 00:04:55.369 10:32:24 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.369 * Looking for test storage... 00:04:55.369 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client 00:04:55.369 10:32:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:55.369 OK 00:04:55.369 10:32:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.369 00:04:55.369 real 0m0.111s 00:04:55.369 user 0m0.049s 00:04:55.369 sys 0m0.070s 00:04:55.369 10:32:24 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:55.369 10:32:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.369 ************************************ 00:04:55.370 END TEST rpc_client 00:04:55.370 ************************************ 00:04:55.370 10:32:24 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.370 10:32:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:55.370 10:32:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:55.370 10:32:24 -- common/autotest_common.sh@10 -- # set +x 00:04:55.370 ************************************ 00:04:55.370 START TEST json_config 00:04:55.370 ************************************ 00:04:55.370 10:32:24 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:04:55.633 10:32:24 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.633 10:32:24 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.633 10:32:24 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.633 10:32:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.633 10:32:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.633 10:32:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.633 10:32:24 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.633 10:32:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@47 -- # : 0 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.633 10:32:24 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json') 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:55.633 INFO: JSON configuration test init 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:55.633 10:32:24 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:55.633 10:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.633 10:32:24 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:55.633 10:32:24 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:55.633 10:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.634 10:32:24 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:55.634 10:32:24 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.634 10:32:24 json_config -- json_config/common.sh@10 -- # shift 00:04:55.634 10:32:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.634 10:32:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.634 10:32:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.634 10:32:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.634 10:32:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.634 10:32:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3883630 00:04:55.634 10:32:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.634 Waiting for target to run... 
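The "Waiting for target to run..." message comes from waitforlisten, traced next, which polls the target's RPC socket until it answers (this works even under --wait-for-rpc, since the RPC server comes up before subsystem init). A condensed sketch of that polling loop; the retry budget and socket default are simplified assumptions:

# Poll an SPDK target's RPC socket until it responds or retries run out.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
    for ((i = 0; i < 100; i++)); do
        # Abort early if the target died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the RPC server is listening.
        if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}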
00:04:55.634 10:32:24 json_config -- json_config/common.sh@25 -- # waitforlisten 3883630 /var/tmp/spdk_tgt.sock 00:04:55.634 10:32:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:55.634 10:32:24 json_config -- common/autotest_common.sh@830 -- # '[' -z 3883630 ']' 00:04:55.634 10:32:24 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.634 10:32:24 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:55.634 10:32:24 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.634 10:32:24 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:55.634 10:32:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.634 [2024-06-10 10:32:24.538619] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:55.634 [2024-06-10 10:32:24.538667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883630 ] 00:04:55.634 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.253 [2024-06-10 10:32:24.991774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.253 [2024-06-10 10:32:25.080492] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.513 10:32:25 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:56.513 10:32:25 json_config -- common/autotest_common.sh@863 -- # return 0 00:04:56.513 10:32:25 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.513 00:04:56.513 10:32:25 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:56.513 10:32:25 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:56.513 10:32:25 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:56.513 10:32:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.513 10:32:25 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:56.513 10:32:25 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:56.513 10:32:25 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:56.513 10:32:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.513 10:32:25 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:56.513 10:32:25 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:56.513 10:32:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.801 10:32:28 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:59.801 10:32:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:04:59.801 10:32:28 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:04:59.801 10:32:28 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:04:59.801 10:32:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@289 
-- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:06.391 10:32:34 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:06.392 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.392 10:32:34 json_config -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:06.392 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@375 -- # (( 0 != 1 )) 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@375 -- # modprobe -r irdma 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@377 -- # modinfo irdma 00:05:06.392 10:32:34 json_config -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:06.392 Found net devices under 0000:af:00.0: cvl_0_0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:06.392 Found net devices under 0000:af:00.1: cvl_0_1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:06.392 
10:32:35 json_config -- nvmf/common.sh@58 -- # uname 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@104 -- # echo cvl_0_0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@104 -- # echo cvl_0_1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@74 -- # ip= 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@76 -- # ip addr add 192.168.100.8/24 dev cvl_0_0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@77 -- # ip link set cvl_0_0 up 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:05:06.392 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:05:06.392 link/ether 
b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:05:06.392 altname enp175s0f0np0 00:05:06.392 altname ens801f0np0 00:05:06.392 inet 192.168.100.8/24 scope global cvl_0_0 00:05:06.392 valid_lft forever preferred_lft forever 00:05:06.392 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:05:06.392 valid_lft forever preferred_lft forever 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@74 -- # ip= 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@76 -- # ip addr add 192.168.100.9/24 dev cvl_0_1 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@77 -- # ip link set cvl_0_1 up 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:05:06.392 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:05:06.392 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:05:06.392 altname enp175s0f1np1 00:05:06.392 altname ens801f1np1 00:05:06.392 inet 192.168.100.9/24 scope global cvl_0_1 00:05:06.392 valid_lft forever preferred_lft forever 00:05:06.392 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:05:06.392 valid_lft forever preferred_lft forever 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@422 -- # return 0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@104 -- # echo cvl_0_0 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:05:06.392 10:32:35 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:05:06.392 10:32:35 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@104 -- # echo cvl_0_1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:06.393 192.168.100.9' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:06.393 192.168.100.9' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:06.393 192.168.100.9' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:06.393 10:32:35 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:06.393 10:32:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:06.393 10:32:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.393 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.393 MallocForNvmf0 00:05:06.393 10:32:35 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.393 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.652 MallocForNvmf1 00:05:06.652 10:32:35 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:06.652 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma 
-u 8192 -c 0 00:05:06.911 [2024-06-10 10:32:35.730803] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:06.911 [2024-06-10 10:32:35.745095] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6ce990/0x6cdfd0) succeed. 00:05:06.911 [2024-06-10 10:32:35.755184] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6d0ad0/0x6ce550) succeed. 00:05:06.911 [2024-06-10 10:32:35.755213] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:05:06.911 [2024-06-10 10:32:35.757399] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:05:06.911 [2024-06-10 10:32:35.757412] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:06.911 [2024-06-10 10:32:35.759041] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:05:06.911 10:32:35 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.911 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.911 10:32:35 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.911 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.170 10:32:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.170 10:32:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.429 10:32:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:07.429 10:32:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:07.429 [2024-06-10 10:32:36.405034] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:07.429 10:32:36 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:07.429 10:32:36 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:07.429 10:32:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.688 10:32:36 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:07.688 10:32:36 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:07.688 10:32:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.688 10:32:36 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:07.688 10:32:36 json_config -- 
json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.688 10:32:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.688 MallocBdevForConfigChangeCheck 00:05:07.688 10:32:36 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:07.688 10:32:36 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:07.688 10:32:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.688 10:32:36 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:07.688 10:32:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.256 10:32:36 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:08.256 INFO: shutting down applications... 00:05:08.256 10:32:36 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:08.256 10:32:36 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:08.256 10:32:36 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:08.256 10:32:36 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:09.634 Calling clear_iscsi_subsystem 00:05:09.634 Calling clear_nvmf_subsystem 00:05:09.634 Calling clear_nbd_subsystem 00:05:09.634 Calling clear_ublk_subsystem 00:05:09.634 Calling clear_vhost_blk_subsystem 00:05:09.634 Calling clear_vhost_scsi_subsystem 00:05:09.634 Calling clear_bdev_subsystem 00:05:09.634 10:32:38 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py 00:05:09.634 10:32:38 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:09.634 10:32:38 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:09.634 10:32:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.634 10:32:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.634 10:32:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:09.893 10:32:38 json_config -- json_config/json_config.sh@345 -- # break 00:05:09.893 10:32:38 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:09.893 10:32:38 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:09.893 10:32:38 json_config -- json_config/common.sh@31 -- # local app=target 00:05:09.893 10:32:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.893 10:32:38 json_config -- json_config/common.sh@35 -- # [[ -n 3883630 ]] 00:05:09.893 10:32:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3883630 00:05:09.893 10:32:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.893 10:32:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.893 10:32:38 json_config -- 
json_config/common.sh@41 -- # kill -0 3883630 00:05:09.893 10:32:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.461 10:32:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.461 10:32:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.461 10:32:39 json_config -- json_config/common.sh@41 -- # kill -0 3883630 00:05:10.461 10:32:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.461 10:32:39 json_config -- json_config/common.sh@43 -- # break 00:05:10.461 10:32:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.461 10:32:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.461 SPDK target shutdown done 00:05:10.461 10:32:39 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:10.461 INFO: relaunching applications... 00:05:10.462 10:32:39 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.462 10:32:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:10.462 10:32:39 json_config -- json_config/common.sh@10 -- # shift 00:05:10.462 10:32:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.462 10:32:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.462 10:32:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.462 10:32:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.462 10:32:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.462 10:32:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3888698 00:05:10.462 10:32:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.462 Waiting for target to run... 00:05:10.462 10:32:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.462 10:32:39 json_config -- json_config/common.sh@25 -- # waitforlisten 3888698 /var/tmp/spdk_tgt.sock 00:05:10.462 10:32:39 json_config -- common/autotest_common.sh@830 -- # '[' -z 3888698 ']' 00:05:10.462 10:32:39 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.462 10:32:39 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:10.462 10:32:39 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.462 10:32:39 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:10.462 10:32:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.462 [2024-06-10 10:32:39.425631] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
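Once the relaunched target is up, the test re-dumps the live configuration and diffs it against spdk_tgt_config.json, as json_diff.sh traces below. The core technique is simple: normalize both JSON documents, then plain diff. A simplified sketch using jq -S as a stand-in for the repo's config_filter.py -method sort:

# Compare a live target's config against the saved config file.
live=$(mktemp) saved=$(mktemp)

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . > "$live"
jq -S . spdk_tgt_config.json > "$saved"

if diff -u "$saved" "$live"; then
    echo "INFO: JSON config files are the same"
else
    echo "INFO: configuration change detected." >&2
fi
rm -f "$live" "$saved"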
00:05:10.462 [2024-06-10 10:32:39.425680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888698 ] 00:05:10.462 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.031 [2024-06-10 10:32:39.868879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.031 [2024-06-10 10:32:39.958831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.318 [2024-06-10 10:32:42.970742] transport.c: 288:nvmf_transport_create: *WARNING*: The num_shared_buffers value (4095) is larger than the available iobuf pool size (1024). Please increase the iobuf pool sizes. 00:05:14.318 [2024-06-10 10:32:42.984666] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1a0a9e0/0x1a0a020) succeed. 00:05:14.318 [2024-06-10 10:32:42.994094] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1a0cb20/0x1a0a5a0) succeed. 00:05:14.318 [2024-06-10 10:32:42.996152] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:05:14.318 [2024-06-10 10:32:42.996166] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:05:14.318 [2024-06-10 10:32:42.997697] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:05:14.318 [2024-06-10 10:32:43.025900] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:14.576 10:32:43 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:14.577 10:32:43 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:14.577 10:32:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.577 00:05:14.577 10:32:43 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:14.577 10:32:43 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.577 INFO: Checking if target configuration is the same... 00:05:14.577 10:32:43 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.577 10:32:43 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:14.577 10:32:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.577 + '[' 2 -ne 2 ']' 00:05:14.577 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.577 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
00:05:14.577 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:05:14.577 +++ basename /dev/fd/62 00:05:14.577 ++ mktemp /tmp/62.XXX 00:05:14.835 + tmp_file_1=/tmp/62.V5D 00:05:14.835 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.835 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.835 + tmp_file_2=/tmp/spdk_tgt_config.json.0FZ 00:05:14.835 + ret=0 00:05:14.835 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.094 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.094 + diff -u /tmp/62.V5D /tmp/spdk_tgt_config.json.0FZ 00:05:15.094 + echo 'INFO: JSON config files are the same' 00:05:15.094 INFO: JSON config files are the same 00:05:15.094 + rm /tmp/62.V5D /tmp/spdk_tgt_config.json.0FZ 00:05:15.094 + exit 0 00:05:15.094 10:32:43 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:15.094 10:32:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:15.094 INFO: changing configuration and checking if this can be detected... 00:05:15.094 10:32:43 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.094 10:32:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:15.094 10:32:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:15.094 10:32:44 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.094 10:32:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.094 + '[' 2 -ne 2 ']' 00:05:15.094 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:15.094 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 
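The second comparison, already under way here, inverts the expectation: MallocBdevForConfigChangeCheck was just deleted over RPC, so the freshly saved config must no longer match the file and the diff has to come back nonzero (note ret=1 below). A sketch of that step under the same jq stand-in assumption:

# Mutate the running config, then require the diff to notice.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete \
    MallocBdevForConfigChangeCheck

if scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . \
        | diff -u <(jq -S . spdk_tgt_config.json) -; then
    echo "FAIL: configuration change was not detected" >&2
else
    echo "INFO: configuration change detected."
fi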
00:05:15.094 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:05:15.094 +++ basename /dev/fd/62 00:05:15.094 ++ mktemp /tmp/62.XXX 00:05:15.094 + tmp_file_1=/tmp/62.VY7 00:05:15.094 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.094 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:15.094 + tmp_file_2=/tmp/spdk_tgt_config.json.zMp 00:05:15.094 + ret=0 00:05:15.094 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.662 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.662 + diff -u /tmp/62.VY7 /tmp/spdk_tgt_config.json.zMp 00:05:15.662 + ret=1 00:05:15.662 + echo '=== Start of file: /tmp/62.VY7 ===' 00:05:15.662 + cat /tmp/62.VY7 00:05:15.662 + echo '=== End of file: /tmp/62.VY7 ===' 00:05:15.662 + echo '' 00:05:15.662 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zMp ===' 00:05:15.662 + cat /tmp/spdk_tgt_config.json.zMp 00:05:15.662 + echo '=== End of file: /tmp/spdk_tgt_config.json.zMp ===' 00:05:15.662 + echo '' 00:05:15.662 + rm /tmp/62.VY7 /tmp/spdk_tgt_config.json.zMp 00:05:15.662 + exit 1 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:15.662 INFO: configuration change detected. 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@317 -- # [[ -n 3888698 ]] 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.662 10:32:44 json_config -- json_config/json_config.sh@323 -- # killprocess 3888698 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@949 -- # '[' -z 3888698 ']' 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@953 -- # kill -0 3888698 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@954 -- # uname 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:15.662 10:32:44 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3888698 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3888698' 00:05:15.662 killing process with pid 3888698 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@968 -- # kill 3888698 00:05:15.662 10:32:44 json_config -- common/autotest_common.sh@973 -- # wait 3888698 00:05:17.039 10:32:46 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.039 10:32:46 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:17.039 10:32:46 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:17.039 10:32:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.298 10:32:46 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:17.298 10:32:46 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:17.298 INFO: Success 00:05:17.298 10:32:46 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:17.298 10:32:46 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:17.298 10:32:46 json_config -- nvmf/common.sh@117 -- # sync 00:05:17.298 10:32:46 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:17.298 10:32:46 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:17.298 10:32:46 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:05:17.298 10:32:46 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:17.298 10:32:46 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:05:17.298 00:05:17.298 real 0m21.703s 00:05:17.298 user 0m23.932s 00:05:17.298 sys 0m7.016s 00:05:17.298 10:32:46 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.298 10:32:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.298 ************************************ 00:05:17.299 END TEST json_config 00:05:17.299 ************************************ 00:05:17.299 10:32:46 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.299 10:32:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.299 10:32:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.299 10:32:46 -- common/autotest_common.sh@10 -- # set +x 00:05:17.299 ************************************ 00:05:17.299 START TEST json_config_extra_key 00:05:17.299 ************************************ 00:05:17.299 10:32:46 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:05:17.299 10:32:46 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.299 10:32:46 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.299 10:32:46 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.299 10:32:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.299 10:32:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.299 10:32:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.299 10:32:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:17.299 10:32:46 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:17.299 10:32:46 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:17.299 INFO: launching applications... 
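(Note: the launch traced below starts spdk_tgt directly from a JSON configuration file instead of configuring it RPC by RPC. extra_key.json itself is not reproduced in this log; as a hedged illustration only, such files follow the documented subsystems/method/params shape:

    cat > extra_key.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
)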
00:05:17.299 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3889945 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.299 Waiting for target to run... 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3889945 /var/tmp/spdk_tgt.sock 00:05:17.299 10:32:46 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 3889945 ']' 00:05:17.299 10:32:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:05:17.299 10:32:46 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.299 10:32:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:17.299 10:32:46 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.299 10:32:46 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:17.299 10:32:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.299 [2024-06-10 10:32:46.294259] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:17.299 [2024-06-10 10:32:46.294307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889945 ] 00:05:17.299 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.867 [2024-06-10 10:32:46.730241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.868 [2024-06-10 10:32:46.815269] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.127 10:32:47 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:18.127 10:32:47 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:18.127 00:05:18.127 10:32:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:18.127 INFO: shutting down applications... 
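(Note: waitforlisten above holds the test until the target answers on /var/tmp/spdk_tgt.sock, and the shutdown traced next is its mirror image: SIGINT, then a bounded kill -0 poll. A simplified sketch of both halves; the real waitforlisten retries an RPC rather than just testing for the socket:

    # wait for the RPC socket to come up
    for i in $(seq 1 100); do
        [ -S /var/tmp/spdk_tgt.sock ] && break
        sleep 0.1
    done

    # graceful shutdown: SIGINT, then poll up to 30 x 0.5s for exit
    kill -SIGINT "$pid"
    for i in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
)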
00:05:18.127 10:32:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3889945 ]] 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3889945 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3889945 00:05:18.127 10:32:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.694 10:32:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.694 10:32:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.694 10:32:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3889945 00:05:18.694 10:32:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.694 10:32:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:18.694 10:32:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.694 10:32:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.694 SPDK target shutdown done 00:05:18.694 10:32:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:18.694 Success 00:05:18.694 00:05:18.694 real 0m1.434s 00:05:18.694 user 0m1.046s 00:05:18.694 sys 0m0.531s 00:05:18.694 10:32:47 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:18.694 10:32:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.694 ************************************ 00:05:18.694 END TEST json_config_extra_key 00:05:18.694 ************************************ 00:05:18.694 10:32:47 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.694 10:32:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:18.694 10:32:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.694 10:32:47 -- common/autotest_common.sh@10 -- # set +x 00:05:18.694 ************************************ 00:05:18.694 START TEST alias_rpc 00:05:18.694 ************************************ 00:05:18.694 10:32:47 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:18.694 * Looking for test storage... 
00:05:18.694 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc 00:05:18.694 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:18.953 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3890223 00:05:18.953 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3890223 00:05:18.953 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.953 10:32:47 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 3890223 ']' 00:05:18.953 10:32:47 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.953 10:32:47 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:18.953 10:32:47 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.953 10:32:47 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:18.953 10:32:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.953 [2024-06-10 10:32:47.771685] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:18.953 [2024-06-10 10:32:47.771734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890223 ] 00:05:18.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.953 [2024-06-10 10:32:47.833142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.953 [2024-06-10 10:32:47.903362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:19.892 10:32:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:19.892 10:32:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3890223 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 3890223 ']' 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 3890223 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3890223 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3890223' 00:05:19.892 killing process with pid 3890223 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@968 -- # kill 3890223 00:05:19.892 10:32:48 alias_rpc -- common/autotest_common.sh@973 -- # wait 3890223 00:05:20.154 00:05:20.154 real 0m1.474s 00:05:20.154 user 0m1.628s 00:05:20.154 sys 0m0.382s 00:05:20.154 10:32:49 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.154 10:32:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.154 
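(Note: killprocess, traced above for pid 3890223, distills to: probe the pid with kill -0, refuse to signal anything whose command name is sudo, then kill and reap. A minimal sketch:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                                  # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1 # never kill sudo
        kill "$pid"                                                 # default SIGTERM
        wait "$pid"                                                 # reap, return its status
    }
)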
************************************ 00:05:20.154 END TEST alias_rpc 00:05:20.154 ************************************ 00:05:20.154 10:32:49 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:20.154 10:32:49 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:20.154 10:32:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.154 10:32:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.154 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:05:20.154 ************************************ 00:05:20.154 START TEST spdkcli_tcp 00:05:20.154 ************************************ 00:05:20.154 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:20.486 * Looking for test storage... 00:05:20.486 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:05:20.486 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:05:20.486 10:32:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:20.486 10:32:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:05:20.486 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:20.486 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:20.486 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:20.486 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.487 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3890510 00:05:20.487 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3890510 00:05:20.487 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 3890510 ']' 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:20.487 10:32:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.487 [2024-06-10 10:32:49.311068] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:20.487 [2024-06-10 10:32:49.311115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890510 ] 00:05:20.487 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.487 [2024-06-10 10:32:49.370778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.487 [2024-06-10 10:32:49.442415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.487 [2024-06-10 10:32:49.442418] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.424 10:32:50 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:21.424 10:32:50 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:21.424 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3890687 00:05:21.424 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:21.424 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:21.424 [ 00:05:21.424 "bdev_malloc_delete", 00:05:21.424 "bdev_malloc_create", 00:05:21.424 "bdev_null_resize", 00:05:21.424 "bdev_null_delete", 00:05:21.424 "bdev_null_create", 00:05:21.424 "bdev_nvme_cuse_unregister", 00:05:21.424 "bdev_nvme_cuse_register", 00:05:21.424 "bdev_opal_new_user", 00:05:21.424 "bdev_opal_set_lock_state", 00:05:21.424 "bdev_opal_delete", 00:05:21.424 "bdev_opal_get_info", 00:05:21.424 "bdev_opal_create", 00:05:21.424 "bdev_nvme_opal_revert", 00:05:21.424 "bdev_nvme_opal_init", 00:05:21.424 "bdev_nvme_send_cmd", 00:05:21.424 "bdev_nvme_get_path_iostat", 00:05:21.424 "bdev_nvme_get_mdns_discovery_info", 00:05:21.424 "bdev_nvme_stop_mdns_discovery", 00:05:21.424 "bdev_nvme_start_mdns_discovery", 00:05:21.424 "bdev_nvme_set_multipath_policy", 00:05:21.424 "bdev_nvme_set_preferred_path", 00:05:21.424 "bdev_nvme_get_io_paths", 00:05:21.424 "bdev_nvme_remove_error_injection", 00:05:21.424 "bdev_nvme_add_error_injection", 00:05:21.424 "bdev_nvme_get_discovery_info", 00:05:21.424 "bdev_nvme_stop_discovery", 00:05:21.424 "bdev_nvme_start_discovery", 00:05:21.424 "bdev_nvme_get_controller_health_info", 00:05:21.424 "bdev_nvme_disable_controller", 00:05:21.424 "bdev_nvme_enable_controller", 00:05:21.424 "bdev_nvme_reset_controller", 00:05:21.424 "bdev_nvme_get_transport_statistics", 00:05:21.424 "bdev_nvme_apply_firmware", 00:05:21.424 "bdev_nvme_detach_controller", 00:05:21.424 "bdev_nvme_get_controllers", 00:05:21.424 "bdev_nvme_attach_controller", 00:05:21.424 "bdev_nvme_set_hotplug", 00:05:21.424 "bdev_nvme_set_options", 00:05:21.424 "bdev_passthru_delete", 00:05:21.424 "bdev_passthru_create", 00:05:21.424 "bdev_lvol_set_parent_bdev", 00:05:21.424 "bdev_lvol_set_parent", 00:05:21.424 "bdev_lvol_check_shallow_copy", 00:05:21.424 "bdev_lvol_start_shallow_copy", 00:05:21.424 "bdev_lvol_grow_lvstore", 00:05:21.424 "bdev_lvol_get_lvols", 00:05:21.424 "bdev_lvol_get_lvstores", 00:05:21.424 "bdev_lvol_delete", 00:05:21.424 "bdev_lvol_set_read_only", 00:05:21.424 "bdev_lvol_resize", 00:05:21.424 "bdev_lvol_decouple_parent", 00:05:21.424 "bdev_lvol_inflate", 00:05:21.424 "bdev_lvol_rename", 00:05:21.425 "bdev_lvol_clone_bdev", 00:05:21.425 "bdev_lvol_clone", 00:05:21.425 "bdev_lvol_snapshot", 00:05:21.425 "bdev_lvol_create", 00:05:21.425 "bdev_lvol_delete_lvstore", 00:05:21.425 "bdev_lvol_rename_lvstore", 
00:05:21.425 "bdev_lvol_create_lvstore", 00:05:21.425 "bdev_raid_set_options", 00:05:21.425 "bdev_raid_remove_base_bdev", 00:05:21.425 "bdev_raid_add_base_bdev", 00:05:21.425 "bdev_raid_delete", 00:05:21.425 "bdev_raid_create", 00:05:21.425 "bdev_raid_get_bdevs", 00:05:21.425 "bdev_error_inject_error", 00:05:21.425 "bdev_error_delete", 00:05:21.425 "bdev_error_create", 00:05:21.425 "bdev_split_delete", 00:05:21.425 "bdev_split_create", 00:05:21.425 "bdev_delay_delete", 00:05:21.425 "bdev_delay_create", 00:05:21.425 "bdev_delay_update_latency", 00:05:21.425 "bdev_zone_block_delete", 00:05:21.425 "bdev_zone_block_create", 00:05:21.425 "blobfs_create", 00:05:21.425 "blobfs_detect", 00:05:21.425 "blobfs_set_cache_size", 00:05:21.425 "bdev_aio_delete", 00:05:21.425 "bdev_aio_rescan", 00:05:21.425 "bdev_aio_create", 00:05:21.425 "bdev_ftl_set_property", 00:05:21.425 "bdev_ftl_get_properties", 00:05:21.425 "bdev_ftl_get_stats", 00:05:21.425 "bdev_ftl_unmap", 00:05:21.425 "bdev_ftl_unload", 00:05:21.425 "bdev_ftl_delete", 00:05:21.425 "bdev_ftl_load", 00:05:21.425 "bdev_ftl_create", 00:05:21.425 "bdev_virtio_attach_controller", 00:05:21.425 "bdev_virtio_scsi_get_devices", 00:05:21.425 "bdev_virtio_detach_controller", 00:05:21.425 "bdev_virtio_blk_set_hotplug", 00:05:21.425 "bdev_iscsi_delete", 00:05:21.425 "bdev_iscsi_create", 00:05:21.425 "bdev_iscsi_set_options", 00:05:21.425 "accel_error_inject_error", 00:05:21.425 "ioat_scan_accel_module", 00:05:21.425 "dsa_scan_accel_module", 00:05:21.425 "iaa_scan_accel_module", 00:05:21.425 "keyring_file_remove_key", 00:05:21.425 "keyring_file_add_key", 00:05:21.425 "keyring_linux_set_options", 00:05:21.425 "iscsi_get_histogram", 00:05:21.425 "iscsi_enable_histogram", 00:05:21.425 "iscsi_set_options", 00:05:21.425 "iscsi_get_auth_groups", 00:05:21.425 "iscsi_auth_group_remove_secret", 00:05:21.425 "iscsi_auth_group_add_secret", 00:05:21.425 "iscsi_delete_auth_group", 00:05:21.425 "iscsi_create_auth_group", 00:05:21.425 "iscsi_set_discovery_auth", 00:05:21.425 "iscsi_get_options", 00:05:21.425 "iscsi_target_node_request_logout", 00:05:21.425 "iscsi_target_node_set_redirect", 00:05:21.425 "iscsi_target_node_set_auth", 00:05:21.425 "iscsi_target_node_add_lun", 00:05:21.425 "iscsi_get_stats", 00:05:21.425 "iscsi_get_connections", 00:05:21.425 "iscsi_portal_group_set_auth", 00:05:21.425 "iscsi_start_portal_group", 00:05:21.425 "iscsi_delete_portal_group", 00:05:21.425 "iscsi_create_portal_group", 00:05:21.425 "iscsi_get_portal_groups", 00:05:21.425 "iscsi_delete_target_node", 00:05:21.425 "iscsi_target_node_remove_pg_ig_maps", 00:05:21.425 "iscsi_target_node_add_pg_ig_maps", 00:05:21.425 "iscsi_create_target_node", 00:05:21.425 "iscsi_get_target_nodes", 00:05:21.425 "iscsi_delete_initiator_group", 00:05:21.425 "iscsi_initiator_group_remove_initiators", 00:05:21.425 "iscsi_initiator_group_add_initiators", 00:05:21.425 "iscsi_create_initiator_group", 00:05:21.425 "iscsi_get_initiator_groups", 00:05:21.425 "nvmf_set_crdt", 00:05:21.425 "nvmf_set_config", 00:05:21.425 "nvmf_set_max_subsystems", 00:05:21.425 "nvmf_stop_mdns_prr", 00:05:21.425 "nvmf_publish_mdns_prr", 00:05:21.425 "nvmf_subsystem_get_listeners", 00:05:21.425 "nvmf_subsystem_get_qpairs", 00:05:21.425 "nvmf_subsystem_get_controllers", 00:05:21.425 "nvmf_get_stats", 00:05:21.425 "nvmf_get_transports", 00:05:21.425 "nvmf_create_transport", 00:05:21.425 "nvmf_get_targets", 00:05:21.425 "nvmf_delete_target", 00:05:21.425 "nvmf_create_target", 00:05:21.425 "nvmf_subsystem_allow_any_host", 00:05:21.425 
"nvmf_subsystem_remove_host", 00:05:21.425 "nvmf_subsystem_add_host", 00:05:21.425 "nvmf_ns_remove_host", 00:05:21.425 "nvmf_ns_add_host", 00:05:21.425 "nvmf_subsystem_remove_ns", 00:05:21.425 "nvmf_subsystem_add_ns", 00:05:21.425 "nvmf_subsystem_listener_set_ana_state", 00:05:21.425 "nvmf_discovery_get_referrals", 00:05:21.425 "nvmf_discovery_remove_referral", 00:05:21.425 "nvmf_discovery_add_referral", 00:05:21.425 "nvmf_subsystem_remove_listener", 00:05:21.425 "nvmf_subsystem_add_listener", 00:05:21.425 "nvmf_delete_subsystem", 00:05:21.425 "nvmf_create_subsystem", 00:05:21.425 "nvmf_get_subsystems", 00:05:21.425 "env_dpdk_get_mem_stats", 00:05:21.425 "nbd_get_disks", 00:05:21.425 "nbd_stop_disk", 00:05:21.425 "nbd_start_disk", 00:05:21.425 "ublk_recover_disk", 00:05:21.425 "ublk_get_disks", 00:05:21.425 "ublk_stop_disk", 00:05:21.425 "ublk_start_disk", 00:05:21.425 "ublk_destroy_target", 00:05:21.425 "ublk_create_target", 00:05:21.425 "virtio_blk_create_transport", 00:05:21.425 "virtio_blk_get_transports", 00:05:21.425 "vhost_controller_set_coalescing", 00:05:21.425 "vhost_get_controllers", 00:05:21.425 "vhost_delete_controller", 00:05:21.425 "vhost_create_blk_controller", 00:05:21.425 "vhost_scsi_controller_remove_target", 00:05:21.425 "vhost_scsi_controller_add_target", 00:05:21.425 "vhost_start_scsi_controller", 00:05:21.425 "vhost_create_scsi_controller", 00:05:21.425 "thread_set_cpumask", 00:05:21.425 "framework_get_scheduler", 00:05:21.425 "framework_set_scheduler", 00:05:21.425 "framework_get_reactors", 00:05:21.425 "thread_get_io_channels", 00:05:21.425 "thread_get_pollers", 00:05:21.425 "thread_get_stats", 00:05:21.425 "framework_monitor_context_switch", 00:05:21.425 "spdk_kill_instance", 00:05:21.425 "log_enable_timestamps", 00:05:21.425 "log_get_flags", 00:05:21.425 "log_clear_flag", 00:05:21.425 "log_set_flag", 00:05:21.425 "log_get_level", 00:05:21.425 "log_set_level", 00:05:21.425 "log_get_print_level", 00:05:21.425 "log_set_print_level", 00:05:21.425 "framework_enable_cpumask_locks", 00:05:21.425 "framework_disable_cpumask_locks", 00:05:21.425 "framework_wait_init", 00:05:21.425 "framework_start_init", 00:05:21.425 "scsi_get_devices", 00:05:21.425 "bdev_get_histogram", 00:05:21.425 "bdev_enable_histogram", 00:05:21.425 "bdev_set_qos_limit", 00:05:21.425 "bdev_set_qd_sampling_period", 00:05:21.425 "bdev_get_bdevs", 00:05:21.425 "bdev_reset_iostat", 00:05:21.425 "bdev_get_iostat", 00:05:21.425 "bdev_examine", 00:05:21.425 "bdev_wait_for_examine", 00:05:21.425 "bdev_set_options", 00:05:21.425 "notify_get_notifications", 00:05:21.425 "notify_get_types", 00:05:21.425 "accel_get_stats", 00:05:21.425 "accel_set_options", 00:05:21.425 "accel_set_driver", 00:05:21.425 "accel_crypto_key_destroy", 00:05:21.425 "accel_crypto_keys_get", 00:05:21.425 "accel_crypto_key_create", 00:05:21.425 "accel_assign_opc", 00:05:21.425 "accel_get_module_info", 00:05:21.425 "accel_get_opc_assignments", 00:05:21.425 "vmd_rescan", 00:05:21.425 "vmd_remove_device", 00:05:21.425 "vmd_enable", 00:05:21.425 "sock_get_default_impl", 00:05:21.425 "sock_set_default_impl", 00:05:21.425 "sock_impl_set_options", 00:05:21.425 "sock_impl_get_options", 00:05:21.425 "iobuf_get_stats", 00:05:21.425 "iobuf_set_options", 00:05:21.425 "framework_get_pci_devices", 00:05:21.425 "framework_get_config", 00:05:21.425 "framework_get_subsystems", 00:05:21.425 "trace_get_info", 00:05:21.425 "trace_get_tpoint_group_mask", 00:05:21.425 "trace_disable_tpoint_group", 00:05:21.425 "trace_enable_tpoint_group", 00:05:21.425 
"trace_clear_tpoint_mask", 00:05:21.425 "trace_set_tpoint_mask", 00:05:21.425 "keyring_get_keys", 00:05:21.425 "spdk_get_version", 00:05:21.425 "rpc_get_methods" 00:05:21.425 ] 00:05:21.425 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.425 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:21.425 10:32:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3890510 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 3890510 ']' 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 3890510 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3890510 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3890510' 00:05:21.425 killing process with pid 3890510 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 3890510 00:05:21.425 10:32:50 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 3890510 00:05:21.684 00:05:21.684 real 0m1.500s 00:05:21.684 user 0m2.780s 00:05:21.684 sys 0m0.431s 00:05:21.684 10:32:50 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:21.684 10:32:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.684 ************************************ 00:05:21.684 END TEST spdkcli_tcp 00:05:21.684 ************************************ 00:05:21.684 10:32:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.684 10:32:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.684 10:32:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.684 10:32:50 -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 ************************************ 00:05:21.944 START TEST dpdk_mem_utility 00:05:21.944 ************************************ 00:05:21.944 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.944 * Looking for test storage... 
00:05:21.944 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility 00:05:21.944 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.944 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3890812 00:05:21.944 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3890812 00:05:21.944 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.944 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 3890812 ']' 00:05:21.944 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.944 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:21.944 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.944 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:21.944 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 [2024-06-10 10:32:50.875951] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:21.944 [2024-06-10 10:32:50.876001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890812 ] 00:05:21.944 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.944 [2024-06-10 10:32:50.935712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.203 [2024-06-10 10:32:51.013599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.771 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:22.771 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:22.771 10:32:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.771 10:32:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.771 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.771 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.771 { 00:05:22.771 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.771 } 00:05:22.771 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.771 10:32:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:22.771 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:22.771 1 heaps totaling size 814.000000 MiB 00:05:22.771 size: 814.000000 MiB heap id: 0 00:05:22.771 end heaps---------- 00:05:22.771 8 mempools totaling size 598.116089 MiB 00:05:22.771 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:22.771 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:22.771 size: 84.521057 MiB name: bdev_io_3890812 00:05:22.771 size: 51.011292 MiB name: evtpool_3890812 00:05:22.771 size: 50.003479 MiB name: 
msgpool_3890812 00:05:22.771 size: 21.763794 MiB name: PDU_Pool 00:05:22.771 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:22.771 size: 0.026123 MiB name: Session_Pool 00:05:22.771 end mempools------- 00:05:22.771 6 memzones totaling size 4.142822 MiB 00:05:22.771 size: 1.000366 MiB name: RG_ring_0_3890812 00:05:22.771 size: 1.000366 MiB name: RG_ring_1_3890812 00:05:22.771 size: 1.000366 MiB name: RG_ring_4_3890812 00:05:22.771 size: 1.000366 MiB name: RG_ring_5_3890812 00:05:22.771 size: 0.125366 MiB name: RG_ring_2_3890812 00:05:22.771 size: 0.015991 MiB name: RG_ring_3_3890812 00:05:22.771 end memzones------- 00:05:22.771 10:32:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:22.771 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:22.771 list of free elements. size: 12.519348 MiB 00:05:22.771 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:22.771 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:22.771 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:22.771 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:22.771 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:22.771 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:22.771 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:22.771 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:22.771 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:22.771 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:22.771 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:22.771 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:22.771 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:22.771 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:22.771 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:22.771 list of standard malloc elements. 
size: 199.218079 MiB 00:05:22.771 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:22.771 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:22.771 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:22.771 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:22.771 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:22.771 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:22.771 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:22.771 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:22.771 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:22.771 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:22.771 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:22.772 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:22.772 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:22.772 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:22.772 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:22.772 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:22.772 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:22.772 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:22.772 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:22.772 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:22.772 list of memzone associated elements. 
size: 602.262573 MiB 00:05:22.772 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:22.772 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:22.772 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:22.772 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:22.772 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:22.772 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3890812_0 00:05:22.772 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:22.772 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3890812_0 00:05:22.772 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:22.772 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3890812_0 00:05:22.772 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:22.772 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:22.772 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:22.772 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:22.772 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:22.772 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3890812 00:05:22.772 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:22.772 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3890812 00:05:22.772 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:22.772 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3890812 00:05:22.772 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:22.772 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:22.772 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:22.772 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:22.772 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:22.772 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:22.772 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:22.772 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:22.772 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:22.772 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3890812 00:05:22.772 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:22.772 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3890812 00:05:22.772 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:22.772 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3890812 00:05:22.772 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:22.772 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3890812 00:05:22.772 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:22.772 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3890812 00:05:22.772 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:22.772 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:22.772 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:22.772 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:22.772 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:22.772 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:22.772 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:22.772 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3890812 00:05:22.772 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:22.772 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:22.772 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:22.772 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:22.772 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:22.772 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3890812 00:05:22.772 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:22.772 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:22.772 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:22.772 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3890812 00:05:22.772 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:22.772 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3890812 00:05:22.772 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:22.772 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:22.772 10:32:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:22.772 10:32:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3890812 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 3890812 ']' 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 3890812 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3890812 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3890812' 00:05:22.772 killing process with pid 3890812 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 3890812 00:05:22.772 10:32:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 3890812 00:05:23.339 00:05:23.339 real 0m1.354s 00:05:23.339 user 0m1.407s 00:05:23.339 sys 0m0.383s 00:05:23.339 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:23.339 10:32:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.339 ************************************ 00:05:23.339 END TEST dpdk_mem_utility 00:05:23.339 ************************************ 00:05:23.339 10:32:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:23.339 10:32:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:23.339 10:32:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:23.339 10:32:52 -- common/autotest_common.sh@10 -- # set +x 00:05:23.339 ************************************ 00:05:23.339 START TEST event 00:05:23.339 ************************************ 00:05:23.339 10:32:52 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:05:23.339 * Looking for test storage... 
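(Note: the heap, mempool and memzone report above is produced in two steps, both visible in the trace: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, then dpdk_mem_info.py parses that dump:

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0            # element-level detail for heap id 0
)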
00:05:23.339 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:23.339 10:32:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:23.339 10:32:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:23.339 10:32:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.339 10:32:52 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:23.339 10:32:52 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:23.339 10:32:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.339 ************************************ 00:05:23.339 START TEST event_perf 00:05:23.339 ************************************ 00:05:23.339 10:32:52 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:23.339 Running I/O for 1 seconds...[2024-06-10 10:32:52.284554] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:23.339 [2024-06-10 10:32:52.284607] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891095 ] 00:05:23.339 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.339 [2024-06-10 10:32:52.342785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.598 [2024-06-10 10:32:52.417071] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.598 [2024-06-10 10:32:52.417168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.598 [2024-06-10 10:32:52.417272] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.598 [2024-06-10 10:32:52.417276] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.533 Running I/O for 1 seconds... 00:05:24.533 lcore 0: 205608 00:05:24.533 lcore 1: 205607 00:05:24.533 lcore 2: 205607 00:05:24.533 lcore 3: 205608 00:05:24.533 done. 00:05:24.533 00:05:24.533 real 0m1.213s 00:05:24.533 user 0m4.133s 00:05:24.533 sys 0m0.076s 00:05:24.533 10:32:53 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:24.533 10:32:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.533 ************************************ 00:05:24.533 END TEST event_perf 00:05:24.533 ************************************ 00:05:24.533 10:32:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.533 10:32:53 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:24.533 10:32:53 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.533 10:32:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.533 ************************************ 00:05:24.533 START TEST event_reactor 00:05:24.533 ************************************ 00:05:24.533 10:32:53 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.792 [2024-06-10 10:32:53.565377] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
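(Note: in the event_perf run above, -m 0xF starts one reactor on each of four cores and -t 1 runs the event loop for one second; the four lcore lines are the per-reactor event counters the test prints for that window. The invocation, as traced:

    test/event/event_perf/event_perf -m 0xF -t 1
)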
00:05:24.792 [2024-06-10 10:32:53.565444] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891345 ] 00:05:24.792 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.792 [2024-06-10 10:32:53.626795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.792 [2024-06-10 10:32:53.696715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.169 test_start 00:05:26.169 oneshot 00:05:26.169 tick 100 00:05:26.169 tick 100 00:05:26.169 tick 250 00:05:26.169 tick 100 00:05:26.169 tick 100 00:05:26.169 tick 100 00:05:26.169 tick 250 00:05:26.169 tick 500 00:05:26.169 tick 100 00:05:26.169 tick 100 00:05:26.169 tick 250 00:05:26.169 tick 100 00:05:26.169 tick 100 00:05:26.169 test_end 00:05:26.169 00:05:26.169 real 0m1.217s 00:05:26.169 user 0m1.128s 00:05:26.169 sys 0m0.084s 00:05:26.169 10:32:54 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:26.169 10:32:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:26.169 ************************************ 00:05:26.169 END TEST event_reactor 00:05:26.169 ************************************ 00:05:26.169 10:32:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.169 10:32:54 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:26.169 10:32:54 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:26.169 10:32:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.169 ************************************ 00:05:26.169 START TEST event_reactor_perf 00:05:26.169 ************************************ 00:05:26.169 10:32:54 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:26.169 [2024-06-10 10:32:54.840200] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:26.169 [2024-06-10 10:32:54.840265] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891597 ] 00:05:26.169 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.170 [2024-06-10 10:32:54.901894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.170 [2024-06-10 10:32:54.971008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.107 test_start 00:05:27.107 test_end 00:05:27.107 Performance: 520629 events per second 00:05:27.107 00:05:27.107 real 0m1.216s 00:05:27.107 user 0m1.138s 00:05:27.107 sys 0m0.074s 00:05:27.107 10:32:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:27.107 10:32:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.107 ************************************ 00:05:27.107 END TEST event_reactor_perf 00:05:27.107 ************************************ 00:05:27.107 10:32:56 event -- event/event.sh@49 -- # uname -s 00:05:27.107 10:32:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:27.107 10:32:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.107 10:32:56 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:27.107 10:32:56 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:27.107 10:32:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.107 ************************************ 00:05:27.107 START TEST event_scheduler 00:05:27.107 ************************************ 00:05:27.107 10:32:56 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:27.366 * Looking for test storage... 00:05:27.366 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler 00:05:27.366 10:32:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:27.366 10:32:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3891866 00:05:27.366 10:32:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.366 10:32:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:27.366 10:32:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3891866 00:05:27.366 10:32:56 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 3891866 ']' 00:05:27.366 10:32:56 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.366 10:32:56 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:27.367 10:32:56 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
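The scheduler app is launched with --wait-for-rpc, and the harness blocks in waitforlisten until the app's RPC socket answers. A rough sketch of that polling loop, assuming rpc.py's spdk_get_version call as the liveness probe (the probe used by the real helper is not visible in this trace):

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 1; i <= max_retries; i++)); do
        kill -0 "$pid" || return 1       # the target died while we waited
        if scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null; then
            return 0                     # RPC server is up and answering
        fi
        sleep 0.5                        # retry interval is an assumption
    done
    return 1                             # gave up after max_retries
}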
00:05:27.367 10:32:56 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:27.367 10:32:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.367 [2024-06-10 10:32:56.226477] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:27.367 [2024-06-10 10:32:56.226515] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891866 ] 00:05:27.367 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.367 [2024-06-10 10:32:56.279099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.367 [2024-06-10 10:32:56.357707] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.367 [2024-06-10 10:32:56.357795] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.367 [2024-06-10 10:32:56.357882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.367 [2024-06-10 10:32:56.357884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:28.303 10:32:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.303 POWER: Env isn't set yet! 00:05:28.303 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:28.303 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:28.303 POWER: Cannot set governor of lcore 0 to userspace 00:05:28.303 POWER: Attempting to initialise PSTAT power management... 
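Before subsystem init, the test switches the still-waiting app to the dynamic scheduler over RPC; the POWER lines around this point are DPDK moving each lcore's cpufreq governor to 'performance', and the scheduler_dynamic messages report its defaults (load limit 20, core limit 80, core busy 95). Driven by hand against the same socket, the sequence would look roughly like this sketch (rpc_cmd in the trace is effectively rpc.py pointed at the app):

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC framework_set_scheduler dynamic   # pick the dynamic scheduler while the app waits in --wait-for-rpc
$RPC framework_start_init              # finish SPDK init; DPDK now takes over the governors
$RPC framework_get_scheduler           # optional: confirm 'dynamic' took effect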
00:05:28.303 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:28.303 POWER: Initialized successfully for lcore 0 power management 00:05:28.303 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:28.303 POWER: Initialized successfully for lcore 1 power management 00:05:28.303 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:28.303 POWER: Initialized successfully for lcore 2 power management 00:05:28.303 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:28.303 POWER: Initialized successfully for lcore 3 power management 00:05:28.303 [2024-06-10 10:32:57.077122] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:28.303 [2024-06-10 10:32:57.077135] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:28.303 [2024-06-10 10:32:57.077144] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.303 10:32:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.303 [2024-06-10 10:32:57.144013] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.303 10:32:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.303 10:32:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.303 ************************************ 00:05:28.303 START TEST scheduler_create_thread 00:05:28.303 ************************************ 00:05:28.303 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:28.303 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:28.303 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 2 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 3 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 4 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 5 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 6 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 7 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 8 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 9 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 10 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.304 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.240 10:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:29.240 10:32:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.240 10:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:29.240 10:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.615 10:32:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.615 10:32:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.615 10:32:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.615 10:32:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.615 10:32:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.550 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.550 00:05:31.550 real 0m3.383s 00:05:31.550 user 0m0.023s 00:05:31.550 sys 0m0.004s 00:05:31.550 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:31.550 10:33:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.551 ************************************ 00:05:31.551 END TEST scheduler_create_thread 00:05:31.551 ************************************ 00:05:31.809 10:33:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.809 10:33:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3891866 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 3891866 ']' 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 3891866 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
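scheduler_create_thread drives the app entirely through the scheduler_plugin RPCs traced above: four busy threads pinned to masks 0x1 through 0x8 at 100% load, four idle pinned threads at 0%, two unpinned threads, then scheduler_thread_set_active and scheduler_thread_delete against the returned thread ids. Issued directly, assuming the same plugin is reachable on rpc.py's plugin path, the calls look like this sketch:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
$RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0
tid=$($RPC scheduler_thread_create -n half_active -a 0)       # unpinned; the RPC returns the thread id
$RPC scheduler_thread_set_active "$tid" 50                    # raise it to 50% busy
$RPC scheduler_thread_delete "$tid"                           # and tear it down again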
00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3891866 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3891866' 00:05:31.809 killing process with pid 3891866 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 3891866 00:05:31.809 10:33:00 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 3891866 00:05:32.067 [2024-06-10 10:33:00.944148] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:32.067 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:32.067 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:32.067 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:32.067 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:32.067 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:32.067 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:32.067 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:32.067 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:32.326 00:05:32.326 real 0m5.069s 00:05:32.326 user 0m10.515s 00:05:32.326 sys 0m0.343s 00:05:32.326 10:33:01 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.326 10:33:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.326 ************************************ 00:05:32.326 END TEST event_scheduler 00:05:32.326 ************************************ 00:05:32.326 10:33:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:32.326 10:33:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:32.326 10:33:01 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.326 10:33:01 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.326 10:33:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.326 ************************************ 00:05:32.326 START TEST app_repeat 00:05:32.326 ************************************ 00:05:32.326 10:33:01 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3892903 00:05:32.326 10:33:01 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3892903' 00:05:32.326 Process app_repeat pid: 3892903 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:32.326 spdk_app_start Round 0 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3892903 /var/tmp/spdk-nbd.sock 00:05:32.326 10:33:01 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3892903 ']' 00:05:32.326 10:33:01 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.326 10:33:01 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:32.326 10:33:01 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.326 10:33:01 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:32.326 10:33:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:32.326 10:33:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.326 [2024-06-10 10:33:01.267839] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:32.326 [2024-06-10 10:33:01.267888] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892903 ] 00:05:32.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.326 [2024-06-10 10:33:01.330560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.585 [2024-06-10 10:33:01.410401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.585 [2024-06-10 10:33:01.410404] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.151 10:33:02 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:33.151 10:33:02 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:33.151 10:33:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.409 Malloc0 00:05:33.409 10:33:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.667 Malloc1 00:05:33.667 10:33:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.667 10:33:02 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.667 10:33:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.668 /dev/nbd0 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.668 1+0 records in 00:05:33.668 1+0 records out 00:05:33.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184448 s, 22.2 MB/s 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:33.668 10:33:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.668 10:33:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.926 /dev/nbd1 00:05:33.926 10:33:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.926 10:33:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:33.926 10:33:02 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.926 1+0 records in 00:05:33.926 1+0 records out 00:05:33.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185536 s, 22.1 MB/s 00:05:33.926 10:33:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:33.927 10:33:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:33.927 10:33:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:33.927 10:33:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:33.927 10:33:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:33.927 10:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.927 10:33:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.927 10:33:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.927 10:33:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.927 10:33:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.185 { 00:05:34.185 "nbd_device": "/dev/nbd0", 00:05:34.185 "bdev_name": "Malloc0" 00:05:34.185 }, 00:05:34.185 { 00:05:34.185 "nbd_device": "/dev/nbd1", 00:05:34.185 "bdev_name": "Malloc1" 00:05:34.185 } 00:05:34.185 ]' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.185 { 00:05:34.185 "nbd_device": "/dev/nbd0", 00:05:34.185 "bdev_name": "Malloc0" 00:05:34.185 }, 00:05:34.185 { 00:05:34.185 "nbd_device": "/dev/nbd1", 00:05:34.185 "bdev_name": "Malloc1" 00:05:34.185 } 00:05:34.185 ]' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.185 /dev/nbd1' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.185 /dev/nbd1' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.185 10:33:03 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.185 256+0 records in 00:05:34.185 256+0 records out 00:05:34.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00973416 s, 108 MB/s 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.185 256+0 records in 00:05:34.185 256+0 records out 00:05:34.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142003 s, 73.8 MB/s 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.185 256+0 records in 00:05:34.185 256+0 records out 00:05:34.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149338 s, 70.2 MB/s 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.185 10:33:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.444 10:33:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.704 10:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.963 10:33:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.963 10:33:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.963 10:33:03 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:35.221 [2024-06-10 10:33:04.116661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.221 [2024-06-10 10:33:04.183713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.221 [2024-06-10 10:33:04.183715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.221 [2024-06-10 10:33:04.224052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.222 [2024-06-10 10:33:04.224090] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.508 10:33:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.508 10:33:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:38.508 spdk_app_start Round 1 00:05:38.508 10:33:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3892903 /var/tmp/spdk-nbd.sock 00:05:38.508 10:33:06 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3892903 ']' 00:05:38.508 10:33:06 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.508 10:33:06 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:38.508 10:33:06 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.508 10:33:06 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:38.508 10:33:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.508 10:33:07 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:38.508 10:33:07 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:38.508 10:33:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.508 Malloc0 00:05:38.508 10:33:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.508 Malloc1 00:05:38.508 10:33:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
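app_repeat_test cycles the same workload three times against one app: each round waits for the RPC socket, creates Malloc0 and Malloc1 (64 MiB, 4096-byte blocks), verifies them over nbd, then sends spdk_kill_instance SIGTERM and sleeps 3 s before the next round. Condensed from the event.sh trace into a sketch (helper names as they appear there; the real script interleaves these steps through run_test):

rpc_server=/var/tmp/spdk-nbd.sock
test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
echo "Process app_repeat pid: $repeat_pid"
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"
    scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096    # -> Malloc0
    scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096    # -> Malloc1
    nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM    # app restarts for the next round
    sleep 3
done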
00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.508 10:33:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.767 /dev/nbd0 00:05:38.767 10:33:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.767 10:33:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.767 1+0 records in 00:05:38.767 1+0 records out 00:05:38.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180821 s, 22.7 MB/s 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:38.767 10:33:07 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:38.767 10:33:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.767 10:33:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.767 10:33:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.026 /dev/nbd1 00:05:39.026 10:33:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.026 10:33:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
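waitfornbd, traced around every nbd_start_disk, polls /proc/partitions until the kernel registers the device (up to 20 tries), then forces a one-block O_DIRECT read through dd and checks the result is non-empty, proving the nbd connection actually answers. A sketch matching the traced steps (the retry delay and the /tmp scratch path are assumptions; the trace uses a file under the test directory):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do                      # wait for the kernel to register it
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    for ((i = 1; i <= 20; i++)); do                      # then prove it answers a direct read
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0                     # trace checks '4096 != 0'
    done
    return 1
}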
00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.026 1+0 records in 00:05:39.026 1+0 records out 00:05:39.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222062 s, 18.4 MB/s 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:39.026 10:33:07 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:39.026 10:33:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.026 10:33:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.026 10:33:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.026 10:33:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.026 10:33:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.026 10:33:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.026 { 00:05:39.026 "nbd_device": "/dev/nbd0", 00:05:39.026 "bdev_name": "Malloc0" 00:05:39.026 }, 00:05:39.026 { 00:05:39.026 "nbd_device": "/dev/nbd1", 00:05:39.026 "bdev_name": "Malloc1" 00:05:39.026 } 00:05:39.026 ]' 00:05:39.026 10:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.026 { 00:05:39.026 "nbd_device": "/dev/nbd0", 00:05:39.026 "bdev_name": "Malloc0" 00:05:39.026 }, 00:05:39.026 { 00:05:39.026 "nbd_device": "/dev/nbd1", 00:05:39.026 "bdev_name": "Malloc1" 00:05:39.026 } 00:05:39.026 ]' 00:05:39.026 10:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.286 /dev/nbd1' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.286 /dev/nbd1' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.286 256+0 records in 00:05:39.286 256+0 records out 00:05:39.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00319371 s, 328 MB/s 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.286 256+0 records in 00:05:39.286 256+0 records out 00:05:39.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131598 s, 79.7 MB/s 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.286 256+0 records in 00:05:39.286 256+0 records out 00:05:39.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146619 s, 71.5 MB/s 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.286 10:33:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.545 
10:33:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.545 10:33:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.831 10:33:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.831 10:33:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.089 10:33:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.348 [2024-06-10 10:33:09.143554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.348 [2024-06-10 10:33:09.219214] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.348 [2024-06-10 10:33:09.219217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.348 [2024-06-10 10:33:09.260315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
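The write/verify pass seen in each round is plain dd-and-cmp: 1 MiB of /dev/urandom goes into a temp file, the same file is written to each nbd device with O_DIRECT, and cmp -b -n 1M reads every device back against it. Stripped of the nbd_common.sh plumbing, and assuming a /tmp scratch path in place of the test-directory one, the core is this sketch:

tmp_file=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # push it to the device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$nbd"                             # byte-compare what comes back
done
rm "$tmp_file"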
00:05:40.348 [2024-06-10 10:33:09.260354] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.636 10:33:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.636 10:33:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.636 spdk_app_start Round 2 00:05:43.636 10:33:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3892903 /var/tmp/spdk-nbd.sock 00:05:43.636 10:33:11 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3892903 ']' 00:05:43.636 10:33:11 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.636 10:33:11 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:43.636 10:33:11 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.636 10:33:11 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:43.636 10:33:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:43.636 10:33:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.636 Malloc0 00:05:43.636 10:33:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.636 Malloc1 00:05:43.636 10:33:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.636 /dev/nbd0 00:05:43.636 
10:33:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.636 1+0 records in 00:05:43.636 1+0 records out 00:05:43.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180949 s, 22.6 MB/s 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:43.636 10:33:12 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.636 10:33:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.895 /dev/nbd1 00:05:43.895 10:33:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.895 10:33:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.895 1+0 records in 00:05:43.895 1+0 records out 00:05:43.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229601 s, 17.8 MB/s 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:43.895 10:33:12 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:43.895 10:33:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.895 10:33:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.895 10:33:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.895 10:33:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.895 10:33:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.154 { 00:05:44.154 "nbd_device": "/dev/nbd0", 00:05:44.154 "bdev_name": "Malloc0" 00:05:44.154 }, 00:05:44.154 { 00:05:44.154 "nbd_device": "/dev/nbd1", 00:05:44.154 "bdev_name": "Malloc1" 00:05:44.154 } 00:05:44.154 ]' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.154 { 00:05:44.154 "nbd_device": "/dev/nbd0", 00:05:44.154 "bdev_name": "Malloc0" 00:05:44.154 }, 00:05:44.154 { 00:05:44.154 "nbd_device": "/dev/nbd1", 00:05:44.154 "bdev_name": "Malloc1" 00:05:44.154 } 00:05:44.154 ]' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.154 /dev/nbd1' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.154 /dev/nbd1' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.154 256+0 records in 00:05:44.154 256+0 records out 00:05:44.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103298 s, 102 MB/s 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.154 256+0 records in 00:05:44.154 256+0 records out 00:05:44.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132996 s, 78.8 MB/s 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.154 256+0 records in 00:05:44.154 256+0 records out 00:05:44.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140462 s, 74.7 MB/s 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.154 10:33:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.155 10:33:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
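The data check in each round is a plain dd/cmp round-trip: fill a scratch file with 1 MiB from /dev/urandom, copy it onto every exported NBD device with O_DIRECT, then compare the first 1 MiB back byte-for-byte and delete the file. The same sequence as a sketch, with a local scratch path standing in for the test tree's nbdrandtest file:

  tmp=./nbdrandtest                                  # stand-in scratch path
  dd if=/dev/urandom of="$tmp" bs=4096 count=256     # 256 * 4 KiB = 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write, bypassing the page cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                     # exits non-zero on the first mismatch
  done
  rm "$tmp"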
00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.413 10:33:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.672 10:33:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.931 10:33:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.931 10:33:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.931 10:33:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.190 [2024-06-10 10:33:14.105642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.190 [2024-06-10 10:33:14.170989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.190 [2024-06-10 10:33:14.170992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.190 [2024-06-10 10:33:14.211360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.190 [2024-06-10 10:33:14.211399] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
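The count check above is the tear-down assertion: once both devices are stopped, nbd_get_disks must return an empty JSON array, so the derived device count has to be 0. A sketch of the traced counting logic; the bare 'true' in the log suggests grep's non-zero exit on zero matches is being swallowed, written here as '|| true':

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)   # '[]' after a clean stop
  disks=$(echo "$disks_json" | jq -r '.[] | .nbd_device')      # empty string for '[]'
  count=$(echo "$disks" | grep -c /dev/nbd || true)            # prints 0, exits 1 on no match
  if [ "$count" -ne 0 ]; then
      exit 1                                                   # any leftover device fails the test
  fi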
00:05:48.476 10:33:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3892903 /var/tmp/spdk-nbd.sock 00:05:48.476 10:33:16 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3892903 ']' 00:05:48.476 10:33:16 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.476 10:33:16 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:48.476 10:33:16 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.476 10:33:16 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.476 10:33:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:48.476 10:33:17 event.app_repeat -- event/event.sh@39 -- # killprocess 3892903 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 3892903 ']' 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 3892903 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3892903 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3892903' 00:05:48.476 killing process with pid 3892903 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@968 -- # kill 3892903 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@973 -- # wait 3892903 00:05:48.476 spdk_app_start is called in Round 0. 00:05:48.476 Shutdown signal received, stop current app iteration 00:05:48.476 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:48.476 spdk_app_start is called in Round 1. 00:05:48.476 Shutdown signal received, stop current app iteration 00:05:48.476 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:48.476 spdk_app_start is called in Round 2. 00:05:48.476 Shutdown signal received, stop current app iteration 00:05:48.476 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:48.476 spdk_app_start is called in Round 3. 
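The killprocess helper traced above (common/autotest_common.sh) is deliberately careful: it verifies the pid is still alive with kill -0, reads the command name so it can special-case a sudo wrapper, and only then signals and reaps the process so the exit status can be checked. A simplified sketch; the sudo branch's actual handling is not visible in this run and is noted as an assumption:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                                  # fails fast if already gone
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")     # 'reactor_0' for an SPDK app thread
          if [ "$name" = sudo ]; then
              return 1                                # assumed: sudo wrappers are handled separately
          fi
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                     # reap, so a hung shutdown is caught
  }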
00:05:48.476 Shutdown signal received, stop current app iteration 00:05:48.476 10:33:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.476 10:33:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:48.476 00:05:48.476 real 0m16.062s 00:05:48.476 user 0m34.636s 00:05:48.476 sys 0m2.349s 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:48.476 10:33:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 ************************************ 00:05:48.476 END TEST app_repeat 00:05:48.476 ************************************ 00:05:48.476 10:33:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.476 10:33:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.476 10:33:17 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:48.476 10:33:17 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.476 10:33:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 ************************************ 00:05:48.476 START TEST cpu_locks 00:05:48.476 ************************************ 00:05:48.476 10:33:17 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.476 * Looking for test storage... 00:05:48.476 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:48.476 10:33:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.476 10:33:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.476 10:33:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.476 10:33:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.476 10:33:17 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:48.476 10:33:17 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.476 10:33:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 ************************************ 00:05:48.476 START TEST default_locks 00:05:48.476 ************************************ 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3896282 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3896282 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3896282 ']' 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
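From here the log moves into the cpu_locks suite, which exercises SPDK's per-core advisory lock files: a target started with -m 0x1 is expected to hold a lock for core 0 that shows up in lslocks, and the suite's locks_exist check (traced just below) is nothing more than lslocks piped into grep. A sketch of that check; the 'lslocks: write error' seen after each pass is most likely benign, grep -q closes the pipe on its first match while lslocks is still writing:

  locks_exist() {
      local pid=$1
      # Pass iff the target holds a lock whose path contains 'spdk_cpu_lock'.
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }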
00:05:48.476 10:33:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.476 10:33:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.736 [2024-06-10 10:33:17.533292] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:48.736 [2024-06-10 10:33:17.533337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896282 ] 00:05:48.736 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.736 [2024-06-10 10:33:17.592325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.736 [2024-06-10 10:33:17.662392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.304 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:49.304 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:05:49.304 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3896282 00:05:49.304 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3896282 00:05:49.304 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.563 lslocks: write error 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3896282 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 3896282 ']' 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 3896282 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3896282 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3896282' 00:05:49.563 killing process with pid 3896282 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 3896282 00:05:49.563 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 3896282 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3896282 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3896282 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 3896282 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3896282 ']' 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.132 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3896282) - No such process 00:05:50.132 ERROR: process (pid: 3896282) is no longer running 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.132 00:05:50.132 real 0m1.395s 00:05:50.132 user 0m1.443s 00:05:50.132 sys 0m0.455s 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.132 10:33:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.132 ************************************ 00:05:50.132 END TEST default_locks 00:05:50.132 ************************************ 00:05:50.132 10:33:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.132 10:33:18 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:50.132 10:33:18 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.132 10:33:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.132 ************************************ 00:05:50.132 START TEST default_locks_via_rpc 00:05:50.132 ************************************ 00:05:50.132 10:33:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:05:50.132 10:33:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3896540 00:05:50.132 10:33:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3896540 00:05:50.132 10:33:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3896540 ']' 00:05:50.132 10:33:18 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.132 10:33:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:50.132 10:33:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.133 10:33:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.133 10:33:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:50.133 10:33:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.133 [2024-06-10 10:33:18.988336] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:50.133 [2024-06-10 10:33:18.988377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896540 ] 00:05:50.133 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.133 [2024-06-10 10:33:19.046775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.133 [2024-06-10 10:33:19.124557] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3896540 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3896540 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3896540 00:05:51.070 10:33:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 3896540 ']' 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 3896540 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3896540 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3896540' 00:05:51.070 killing process with pid 3896540 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 3896540 00:05:51.070 10:33:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 3896540 00:05:51.329 00:05:51.329 real 0m1.304s 00:05:51.329 user 0m1.374s 00:05:51.329 sys 0m0.376s 00:05:51.329 10:33:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:51.329 10:33:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.329 ************************************ 00:05:51.329 END TEST default_locks_via_rpc 00:05:51.329 ************************************ 00:05:51.329 10:33:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.329 10:33:20 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:51.329 10:33:20 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:51.329 10:33:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.329 ************************************ 00:05:51.329 START TEST non_locking_app_on_locked_coremask 00:05:51.329 ************************************ 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3896799 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3896799 /var/tmp/spdk.sock 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3896799 ']' 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
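The default_locks_via_rpc pass traced above toggles the same locks through the runtime API instead of process flags: framework_disable_cpumask_locks drops the per-core lock files on a live target, framework_enable_cpumask_locks re-takes them, and locks_exist then has to see the lock again. The round-trip as a sketch, assuming the default /var/tmp/spdk.sock RPC socket and a $tgt_pid captured at launch:

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks           # release the core-0 lock while running
  $rpc framework_enable_cpumask_locks            # take it back
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock  # the lock must be visible again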
00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:51.329 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.329 [2024-06-10 10:33:20.343811] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:51.329 [2024-06-10 10:33:20.343847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896799 ] 00:05:51.589 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.589 [2024-06-10 10:33:20.401570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.589 [2024-06-10 10:33:20.479570] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3896818 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3896818 /var/tmp/spdk2.sock 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3896818 ']' 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:52.158 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.158 [2024-06-10 10:33:21.162524] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:52.158 [2024-06-10 10:33:21.162572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896818 ] 00:05:52.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.417 [2024-06-10 10:33:21.242274] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
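non_locking_app_on_locked_coremask demonstrates the opt-out path: the first target locks core 0, and the second is still allowed onto the same mask because it starts with --disable-cpumask-locks (the 'CPU core locks deactivated' notice right above) and listens on its own RPC socket. The pairing as a sketch; in the traced script each launch is followed by a waitforlisten on its socket before the test proceeds:

  tgt=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt
  $tgt -m 0x1 &                                                  # holds the core-0 lock
  pid1=$!
  $tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
  pid2=$!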
00:05:52.417 [2024-06-10 10:33:21.242298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.417 [2024-06-10 10:33:21.380667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.985 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:52.985 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:52.985 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3896799 00:05:52.985 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3896799 00:05:52.985 10:33:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.251 lslocks: write error 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3896799 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3896799 ']' 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3896799 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3896799 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3896799' 00:05:53.251 killing process with pid 3896799 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3896799 00:05:53.251 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3896799 00:05:53.821 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3896818 00:05:53.821 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3896818 ']' 00:05:53.821 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3896818 00:05:53.821 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:53.821 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:53.821 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3896818 00:05:54.078 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:54.078 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:54.078 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3896818' 00:05:54.078 
killing process with pid 3896818 00:05:54.078 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3896818 00:05:54.078 10:33:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3896818 00:05:54.336 00:05:54.336 real 0m2.854s 00:05:54.336 user 0m3.066s 00:05:54.336 sys 0m0.739s 00:05:54.336 10:33:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:54.336 10:33:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.336 ************************************ 00:05:54.336 END TEST non_locking_app_on_locked_coremask 00:05:54.336 ************************************ 00:05:54.336 10:33:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.336 10:33:23 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.336 10:33:23 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.336 10:33:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.336 ************************************ 00:05:54.336 START TEST locking_app_on_unlocked_coremask 00:05:54.336 ************************************ 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3897299 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3897299 /var/tmp/spdk.sock 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3897299 ']' 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:54.336 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.336 [2024-06-10 10:33:23.250353] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:54.336 [2024-06-10 10:33:23.250388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897299 ] 00:05:54.336 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.336 [2024-06-10 10:33:23.308915] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.336 [2024-06-10 10:33:23.308937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.595 [2024-06-10 10:33:23.387690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3897396 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3897396 /var/tmp/spdk2.sock 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3897396 ']' 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.162 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.162 [2024-06-10 10:33:24.086237] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
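locking_app_on_unlocked_coremask is the mirror image of the previous test: here the first instance is the one that declines the locks, so the second, default-configured instance acquires them, and the locks_exist check further down is accordingly run against the second pid (3897396) rather than the first. Only the placement of the flag changes:

  $tgt -m 0x1 --disable-cpumask-locks &      # first instance: takes no lock
  $tgt -m 0x1 -r /var/tmp/spdk2.sock &       # second instance: acquires the core-0 lock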
00:05:55.162 [2024-06-10 10:33:24.086291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897396 ] 00:05:55.162 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.162 [2024-06-10 10:33:24.166309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.421 [2024-06-10 10:33:24.317483] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.987 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:55.987 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:55.987 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3897396 00:05:55.987 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3897396 00:05:55.987 10:33:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.554 lslocks: write error 00:05:56.554 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3897299 00:05:56.554 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3897299 ']' 00:05:56.554 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3897299 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3897299 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3897299' 00:05:56.555 killing process with pid 3897299 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3897299 00:05:56.555 10:33:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3897299 00:05:57.122 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3897396 00:05:57.122 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3897396 ']' 00:05:57.122 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3897396 00:05:57.122 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:57.122 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:57.122 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3897396 00:05:57.380 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:05:57.380 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:57.380 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3897396' 00:05:57.380 killing process with pid 3897396 00:05:57.380 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3897396 00:05:57.380 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3897396 00:05:57.639 00:05:57.639 real 0m3.261s 00:05:57.639 user 0m3.483s 00:05:57.639 sys 0m0.944s 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.639 ************************************ 00:05:57.639 END TEST locking_app_on_unlocked_coremask 00:05:57.639 ************************************ 00:05:57.639 10:33:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.639 10:33:26 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.639 10:33:26 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.639 10:33:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.639 ************************************ 00:05:57.639 START TEST locking_app_on_locked_coremask 00:05:57.639 ************************************ 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3897796 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3897796 /var/tmp/spdk.sock 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3897796 ']' 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:57.639 10:33:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.639 [2024-06-10 10:33:26.578985] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
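locking_app_on_locked_coremask finally asserts the failure path: with the first target holding core 0 and no opt-out flag on either side, the second launch must die during startup (the 'Cannot create lock on core 0' error in the trace below), and the test demands that failure instead of tolerating it. The shape of the check, wrapped in the suite's NOT helper:

  $tgt -m 0x1 &                                   # first target: acquires the core-0 lock
  pid1=$!
  $tgt -m 0x1 -r /var/tmp/spdk2.sock &            # second target: same mask, locks enabled
  pid2=$!
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # passes only if the second never comes up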
00:05:57.639 [2024-06-10 10:33:26.579023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897796 ] 00:05:57.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.639 [2024-06-10 10:33:26.638681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.898 [2024-06-10 10:33:26.717266] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3898023 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3898023 /var/tmp/spdk2.sock 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3898023 /var/tmp/spdk2.sock 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3898023 /var/tmp/spdk2.sock 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3898023 ']' 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:58.465 10:33:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 [2024-06-10 10:33:27.395455] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
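The NOT wrapper evaluated in the next records inverts a command's exit status so that an expected failure counts as a pass. A sketch consistent with the traced checks; the signal-bit fold for statuses above 128 and the elided expected-status comparison are reconstructions, since the trace only shows those branches being skipped:

  NOT() {
      local es=0
      "$@" || es=$?                # run the wrapped command, capture its status
      if ((es > 128)); then
          es=$((es & ~128))        # assumed: fold signal exits into plain failures
      fi
      # (the real helper also compares against an expected status here; empty in this run)
      (( !es == 0 ))               # success iff the wrapped command failed
  }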
00:05:58.465 [2024-06-10 10:33:27.395502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898023 ] 00:05:58.465 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.465 [2024-06-10 10:33:27.473167] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3897796 has claimed it. 00:05:58.465 [2024-06-10 10:33:27.473197] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.032 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3898023) - No such process 00:05:59.032 ERROR: process (pid: 3898023) is no longer running 00:05:59.032 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:59.032 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:59.032 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:59.032 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:59.032 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:59.032 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:59.032 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3897796 00:05:59.033 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3897796 00:05:59.033 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.638 lslocks: write error 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3897796 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3897796 ']' 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3897796 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3897796 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3897796' 00:05:59.638 killing process with pid 3897796 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3897796 00:05:59.638 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3897796 00:05:59.897 00:05:59.897 real 0m2.250s 00:05:59.897 user 0m2.469s 00:05:59.897 sys 0m0.593s 00:05:59.897 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.897 10:33:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 ************************************ 00:05:59.897 END TEST locking_app_on_locked_coremask 00:05:59.897 ************************************ 00:05:59.897 10:33:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:59.897 10:33:28 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:59.897 10:33:28 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.897 10:33:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 ************************************ 00:05:59.897 START TEST locking_overlapped_coremask 00:05:59.897 ************************************ 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3898280 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3898280 /var/tmp/spdk.sock 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3898280 ']' 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:59.897 10:33:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 [2024-06-10 10:33:28.901187] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
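The START TEST / END TEST banners and the real/user/sys timings above are emitted by the run_test wrapper. A hedged sketch of its assumed shape (the genuine version also validates its arguments and records the result for the final summary):

run_test() {
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"                      # run the named test function; prints the real/user/sys lines
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
}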
00:05:59.897 [2024-06-10 10:33:28.901241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898280 ] 00:05:59.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.156 [2024-06-10 10:33:28.962100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.156 [2024-06-10 10:33:29.031447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.156 [2024-06-10 10:33:29.031546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.156 [2024-06-10 10:33:29.031547] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3898509 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3898509 /var/tmp/spdk2.sock 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3898509 /var/tmp/spdk2.sock 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3898509 /var/tmp/spdk2.sock 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3898509 ']' 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:00.721 10:33:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.721 [2024-06-10 10:33:29.742383] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
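The target starting up here asks for -m 0x1c while the first target still holds -m 0x7, and the bitwise overlap of the two masks is exactly what the claim error just below complains about:

# 0x7  = 0b00111 -> cores 0,1,2   (spdk_tgt_pid 3898280)
# 0x1c = 0b11100 -> cores 2,3,4   (the /var/tmp/spdk2.sock target)
printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 set -> "Cannot create lock on core 2"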
00:06:00.721 [2024-06-10 10:33:29.742425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898509 ] 00:06:00.979 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.979 [2024-06-10 10:33:29.826064] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3898280 has claimed it. 00:06:00.979 [2024-06-10 10:33:29.826101] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.547 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3898509) - No such process 00:06:01.547 ERROR: process (pid: 3898509) is no longer running 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3898280 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 3898280 ']' 00:06:01.547 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 3898280 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3898280 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3898280' 00:06:01.548 killing process with pid 3898280 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
3898280 00:06:01.548 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 3898280 00:06:01.807 00:06:01.807 real 0m1.868s 00:06:01.807 user 0m5.258s 00:06:01.807 sys 0m0.411s 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.807 ************************************ 00:06:01.807 END TEST locking_overlapped_coremask 00:06:01.807 ************************************ 00:06:01.807 10:33:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:01.807 10:33:30 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:01.807 10:33:30 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:01.807 10:33:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.807 ************************************ 00:06:01.807 START TEST locking_overlapped_coremask_via_rpc 00:06:01.807 ************************************ 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3898615 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3898615 /var/tmp/spdk.sock 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3898615 ']' 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:01.807 10:33:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.807 [2024-06-10 10:33:30.835846] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:01.807 [2024-06-10 10:33:30.835885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898615 ] 00:06:02.065 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.065 [2024-06-10 10:33:30.894571] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
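Note the --disable-cpumask-locks flag on the launch above, which produces the "CPU core locks deactivated" notice: with it set, overlapping masks are tolerated at startup. A hypothetical minimal repro of this test's setup:

build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# Neither process creates /var/tmp/spdk_cpu_lock_* files, so core 2 is shared
# without complaint until the locks are re-enabled over RPC, as happens next.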
00:06:02.065 [2024-06-10 10:33:30.894595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.065 [2024-06-10 10:33:30.965644] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.065 [2024-06-10 10:33:30.965743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.065 [2024-06-10 10:33:30.965745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3898781 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3898781 /var/tmp/spdk2.sock 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3898781 ']' 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:02.633 10:33:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.892 [2024-06-10 10:33:31.675889] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:02.893 [2024-06-10 10:33:31.675935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898781 ] 00:06:02.893 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.893 [2024-06-10 10:33:31.758978] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:02.893 [2024-06-10 10:33:31.759007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.893 [2024-06-10 10:33:31.904972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.893 [2024-06-10 10:33:31.907994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.893 [2024-06-10 10:33:31.907994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.460 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.460 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:03.460 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.460 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.461 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.719 [2024-06-10 10:33:32.510026] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3898615 has claimed it. 
00:06:03.719 request: 00:06:03.719 { 00:06:03.719 "method": "framework_enable_cpumask_locks", 00:06:03.719 "req_id": 1 00:06:03.719 } 00:06:03.719 Got JSON-RPC error response 00:06:03.719 response: 00:06:03.719 { 00:06:03.719 "code": -32603, 00:06:03.719 "message": "Failed to claim CPU core: 2" 00:06:03.719 } 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:03.719 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3898615 /var/tmp/spdk.sock 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3898615 ']' 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3898781 /var/tmp/spdk2.sock 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3898781 ']' 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
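The request/response pair above is the JSON-RPC round trip hidden behind rpc_cmd. The same two calls made directly with the stock scripts/rpc.py client (socket paths as in the test) would be:

scripts/rpc.py framework_enable_cpumask_locks
# first target: succeeds and claims cores 0-2

scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# second target: fails with code -32603, "Failed to claim CPU core: 2",
# because the first target already holds the lock file for core 2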
00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.720 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.979 00:06:03.979 real 0m2.109s 00:06:03.979 user 0m0.864s 00:06:03.979 sys 0m0.166s 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.979 10:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.979 ************************************ 00:06:03.979 END TEST locking_overlapped_coremask_via_rpc 00:06:03.979 ************************************ 00:06:03.979 10:33:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:03.979 10:33:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3898615 ]] 00:06:03.979 10:33:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3898615 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3898615 ']' 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3898615 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3898615 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3898615' 00:06:03.979 killing process with pid 3898615 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3898615 00:06:03.979 10:33:32 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3898615 00:06:04.547 10:33:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3898781 ]] 00:06:04.547 10:33:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3898781 00:06:04.547 10:33:33 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3898781 ']' 00:06:04.547 10:33:33 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3898781 00:06:04.547 10:33:33 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:04.547 10:33:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:06:04.548 10:33:33 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3898781 00:06:04.548 10:33:33 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:04.548 10:33:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:04.548 10:33:33 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3898781' 00:06:04.548 killing process with pid 3898781 00:06:04.548 10:33:33 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3898781 00:06:04.548 10:33:33 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3898781 00:06:04.807 10:33:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.807 10:33:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.807 10:33:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3898615 ]] 00:06:04.807 10:33:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3898615 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3898615 ']' 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3898615 00:06:04.807 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3898615) - No such process 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3898615 is not found' 00:06:04.807 Process with pid 3898615 is not found 00:06:04.807 10:33:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3898781 ]] 00:06:04.807 10:33:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3898781 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3898781 ']' 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3898781 00:06:04.807 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3898781) - No such process 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3898781 is not found' 00:06:04.807 Process with pid 3898781 is not found 00:06:04.807 10:33:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.807 00:06:04.807 real 0m16.280s 00:06:04.807 user 0m28.460s 00:06:04.807 sys 0m4.556s 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.807 10:33:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.807 ************************************ 00:06:04.807 END TEST cpu_locks 00:06:04.807 ************************************ 00:06:04.807 00:06:04.807 real 0m41.512s 00:06:04.807 user 1m20.205s 00:06:04.807 sys 0m7.770s 00:06:04.807 10:33:33 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.807 10:33:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.807 ************************************ 00:06:04.807 END TEST event 00:06:04.807 ************************************ 00:06:04.807 10:33:33 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:04.807 10:33:33 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:04.807 10:33:33 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.807 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:06:04.807 ************************************ 00:06:04.807 START TEST thread 00:06:04.807 ************************************ 00:06:04.807 10:33:33 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:06:04.807 * Looking for test storage... 00:06:04.807 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread 00:06:04.807 10:33:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.807 10:33:33 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:04.807 10:33:33 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.807 10:33:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.067 ************************************ 00:06:05.067 START TEST thread_poller_perf 00:06:05.067 ************************************ 00:06:05.067 10:33:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.067 [2024-06-10 10:33:33.882129] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:05.067 [2024-06-10 10:33:33.882193] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899326 ] 00:06:05.067 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.067 [2024-06-10 10:33:33.945500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.067 [2024-06-10 10:33:34.015874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.067 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:06.444 ====================================== 00:06:06.444 busy:2107899752 (cyc) 00:06:06.444 total_run_count: 428000 00:06:06.444 tsc_hz: 2100000000 (cyc) 00:06:06.444 ====================================== 00:06:06.444 poller_cost: 4924 (cyc), 2344 (nsec) 00:06:06.444 00:06:06.444 real 0m1.228s 00:06:06.444 user 0m1.150s 00:06:06.444 sys 0m0.074s 00:06:06.444 10:33:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.444 10:33:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.444 ************************************ 00:06:06.444 END TEST thread_poller_perf 00:06:06.444 ************************************ 00:06:06.444 10:33:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.444 10:33:35 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:06.444 10:33:35 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.444 10:33:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.444 ************************************ 00:06:06.444 START TEST thread_poller_perf 00:06:06.444 ************************************ 00:06:06.444 10:33:35 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.444 [2024-06-10 10:33:35.165929] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
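The result block above is consistent with poller_cost being busy cycles divided by the run count, converted to nanoseconds through the TSC rate. A quick check of the reported numbers (an assumed reconstruction of the report, not the tool's source):

busy=2107899752; runs=428000; tsc_hz=2100000000
echo $(( busy / runs ))                        # 4924 cycles per poll, matching the report
echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2344 ns at 2.1 GHz, matching the report

The zero-period run that follows checks out the same way: 2101633462 / 5596000 = 375 cycles, i.e. 178 ns.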
00:06:06.444 [2024-06-10 10:33:35.165984] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899570 ] 00:06:06.444 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.444 [2024-06-10 10:33:35.224473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.444 [2024-06-10 10:33:35.294661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.444 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:07.380 ====================================== 00:06:07.380 busy:2101633462 (cyc) 00:06:07.380 total_run_count: 5596000 00:06:07.380 tsc_hz: 2100000000 (cyc) 00:06:07.380 ====================================== 00:06:07.380 poller_cost: 375 (cyc), 178 (nsec) 00:06:07.380 00:06:07.380 real 0m1.207s 00:06:07.380 user 0m1.130s 00:06:07.380 sys 0m0.073s 00:06:07.380 10:33:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.380 10:33:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.380 ************************************ 00:06:07.380 END TEST thread_poller_perf 00:06:07.380 ************************************ 00:06:07.380 10:33:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.380 00:06:07.380 real 0m2.650s 00:06:07.380 user 0m2.357s 00:06:07.380 sys 0m0.301s 00:06:07.380 10:33:36 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.380 10:33:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.380 ************************************ 00:06:07.380 END TEST thread 00:06:07.380 ************************************ 00:06:07.639 10:33:36 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel.sh 00:06:07.639 10:33:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:07.639 10:33:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.639 10:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:07.639 ************************************ 00:06:07.639 START TEST accel 00:06:07.639 ************************************ 00:06:07.639 10:33:36 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel.sh 00:06:07.639 * Looking for test storage... 
00:06:07.639 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel 00:06:07.639 10:33:36 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:07.639 10:33:36 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:07.639 10:33:36 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.639 10:33:36 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3899860 00:06:07.639 10:33:36 accel -- accel/accel.sh@63 -- # waitforlisten 3899860 00:06:07.639 10:33:36 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:07.639 10:33:36 accel -- common/autotest_common.sh@830 -- # '[' -z 3899860 ']' 00:06:07.639 10:33:36 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.639 10:33:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:07.639 10:33:36 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:07.639 10:33:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.639 10:33:36 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.639 10:33:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.639 10:33:36 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:07.639 10:33:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.639 10:33:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.639 10:33:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.639 10:33:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.639 10:33:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:07.639 10:33:36 accel -- accel/accel.sh@41 -- # jq -r . 00:06:07.639 [2024-06-10 10:33:36.598585] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:07.639 [2024-06-10 10:33:36.598626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899860 ] 00:06:07.639 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.639 [2024-06-10 10:33:36.658233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.899 [2024-06-10 10:33:36.731033] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@863 -- # return 0 00:06:08.467 10:33:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:08.467 10:33:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:08.467 10:33:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:08.467 10:33:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:08.467 10:33:37 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:08.467 10:33:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:08.467 10:33:37 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 
10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:08.467 10:33:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:08.467 10:33:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:08.467 10:33:37 accel -- accel/accel.sh@75 -- # killprocess 3899860 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@949 -- # '[' -z 3899860 ']' 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@953 -- # kill -0 3899860 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@954 -- # uname 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:08.467 10:33:37 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3899860 00:06:08.468 10:33:37 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:08.468 10:33:37 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:08.468 10:33:37 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3899860' 00:06:08.468 killing process with pid 3899860 00:06:08.468 10:33:37 accel -- common/autotest_common.sh@968 -- # kill 3899860 00:06:08.468 10:33:37 accel -- common/autotest_common.sh@973 -- # wait 3899860 00:06:09.036 10:33:37 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:09.036 10:33:37 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:09.036 10:33:37 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:09.036 10:33:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.036 10:33:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.036 10:33:37 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:09.036 10:33:37 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:09.036 10:33:37 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:09.036 10:33:37 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.036 10:33:37 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.036 10:33:37 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.036 10:33:37 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.036 10:33:37 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.037 10:33:37 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:09.037 10:33:37 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
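The long opcode/module table walked through above comes from the query visible in the trace; run standalone it is much easier to read. Every accel opcode is reported with the module assigned to it, all software here since no hardware acceleration module is configured:

scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# expected output shape (opcode names per the accel_perf workload list further below):
#   copy=software
#   fill=software
#   crc32c=software
#   ...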
00:06:09.037 10:33:37 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.037 10:33:37 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:09.037 10:33:37 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:09.037 10:33:37 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:09.037 10:33:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.037 10:33:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.037 ************************************ 00:06:09.037 START TEST accel_missing_filename 00:06:09.037 ************************************ 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.037 10:33:37 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:09.037 10:33:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:09.037 [2024-06-10 10:33:37.943748] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:09.037 [2024-06-10 10:33:37.943796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900125 ] 00:06:09.037 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.037 [2024-06-10 10:33:38.003312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.296 [2024-06-10 10:33:38.075529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.296 [2024-06-10 10:33:38.115042] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.296 [2024-06-10 10:33:38.174385] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:09.296 A filename is required. 
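The es= lines that follow show how the NOT wrapper normalises accel_perf's failing exit status before concluding that this negative test passed. A sketch of the assumed logic, reconstructed from the trace (234 - 128 = 106 matches a strip-the-signal-bias step; the real case mapping is likely more selective than shown here):

es=$?                                   # 234 from the compress run above
(( es > 128 )) && es=$(( es - 128 ))    # 106: treat values above 128 as signal-style exits
case "$es" in
    0) ;;                               # success stays success
    *) es=1 ;;                          # any failure collapses to a generic 1
esac
(( !es == 0 ))                          # NOT inverts the sense: es=1 means the negative test passed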
00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.296 00:06:09.296 real 0m0.325s 00:06:09.296 user 0m0.252s 00:06:09.296 sys 0m0.111s 00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.296 10:33:38 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:09.296 ************************************ 00:06:09.296 END TEST accel_missing_filename 00:06:09.296 ************************************ 00:06:09.296 10:33:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:09.296 10:33:38 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:09.296 10:33:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.296 10:33:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.296 ************************************ 00:06:09.296 START TEST accel_compress_verify 00:06:09.296 ************************************ 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.296 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.296 
10:33:38 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:09.296 10:33:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:09.296 [2024-06-10 10:33:38.313592] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:09.296 [2024-06-10 10:33:38.313656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900150 ] 00:06:09.555 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.555 [2024-06-10 10:33:38.375088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.555 [2024-06-10 10:33:38.445791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.555 [2024-06-10 10:33:38.485917] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.555 [2024-06-10 10:33:38.545586] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:09.815 00:06:09.815 Compression does not support the verify option, aborting. 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.815 00:06:09.815 real 0m0.328s 00:06:09.815 user 0m0.246s 00:06:09.815 sys 0m0.123s 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.815 10:33:38 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:09.815 ************************************ 00:06:09.815 END TEST accel_compress_verify 00:06:09.815 ************************************ 00:06:09.815 10:33:38 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:09.815 10:33:38 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:09.815 10:33:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.815 10:33:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.815 ************************************ 00:06:09.815 START TEST accel_wrong_workload 00:06:09.815 ************************************ 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:09.815 10:33:38 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:09.815 Unsupported workload type: foobar 00:06:09.815 [2024-06-10 10:33:38.687774] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:09.815 accel_perf options: 00:06:09.815 [-h help message] 00:06:09.815 [-q queue depth per core] 00:06:09.815 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:09.815 [-T number of threads per core 00:06:09.815 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:09.815 [-t time in seconds] 00:06:09.815 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:09.815 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:09.815 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:09.815 [-l for compress/decompress workloads, name of uncompressed input file 00:06:09.815 [-S for crc32c workload, use this seed value (default 0) 00:06:09.815 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:09.815 [-f for fill workload, use this BYTE value (default 255) 00:06:09.815 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:09.815 [-y verify result if this switch is on] 00:06:09.815 [-a tasks to allocate per core (default: same value as -q)] 00:06:09.815 Can be used to spread operations across a wider range of memory. 
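The usage text above maps onto a direct invocation of the accel_perf example binary. As a minimal sketch assembled only from flags that appear in this log (the -S 32 and -q 64 values are taken from the accel_crc32c and accel_fill runs below and are purely illustrative):

  # -t 1: run for 1 second; -w crc32c: workload type from the list above;
  # -S 32: seed for the crc32c workload; -q 64: queue depth per core;
  # -y: verify the result of each operation
  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w crc32c -S 32 -q 64 -y

The NOT wrapper around the failing invocations in this section expects a non-zero exit status. Codes above 128 indicate death by signal, so the helper first subtracts 128 (hence es=234 becoming es=106 and es=161 becoming es=33 earlier in this log) before its case statement collapses any remaining failure to es=1.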
00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.815 00:06:09.815 real 0m0.024s 00:06:09.815 user 0m0.018s 00:06:09.815 sys 0m0.006s 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.815 10:33:38 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:09.815 ************************************ 00:06:09.815 END TEST accel_wrong_workload 00:06:09.815 ************************************ 00:06:09.815 Error: writing output failed: Broken pipe 00:06:09.815 10:33:38 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:09.815 10:33:38 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:09.815 10:33:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.815 10:33:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.815 ************************************ 00:06:09.815 START TEST accel_negative_buffers 00:06:09.815 ************************************ 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.815 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:09.815 10:33:38 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:09.815 -x option must be non-negative. 
00:06:09.815 [2024-06-10 10:33:38.788880] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:09.815 accel_perf options: 00:06:09.815 [-h help message] 00:06:09.815 [-q queue depth per core] 00:06:09.815 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:09.815 [-T number of threads per core 00:06:09.815 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:09.815 [-t time in seconds] 00:06:09.815 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:09.815 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:09.815 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:09.816 [-l for compress/decompress workloads, name of uncompressed input file 00:06:09.816 [-S for crc32c workload, use this seed value (default 0) 00:06:09.816 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:09.816 [-f for fill workload, use this BYTE value (default 255) 00:06:09.816 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:09.816 [-y verify result if this switch is on] 00:06:09.816 [-a tasks to allocate per core (default: same value as -q)] 00:06:09.816 Can be used to spread operations across a wider range of memory. 00:06:09.816 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:09.816 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.816 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:09.816 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.816 00:06:09.816 real 0m0.035s 00:06:09.816 user 0m0.022s 00:06:09.816 sys 0m0.013s 00:06:09.816 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.816 10:33:38 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:09.816 ************************************ 00:06:09.816 END TEST accel_negative_buffers 00:06:09.816 ************************************ 00:06:09.816 Error: writing output failed: Broken pipe 00:06:09.816 10:33:38 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:09.816 10:33:38 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:09.816 10:33:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.816 10:33:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.075 ************************************ 00:06:10.075 START TEST accel_crc32c 00:06:10.075 ************************************ 00:06:10.075 10:33:38 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:10.075 10:33:38 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:10.075 [2024-06-10 10:33:38.876651] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:10.075 [2024-06-10 10:33:38.876703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900219 ] 00:06:10.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.075 [2024-06-10 10:33:38.940020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.075 [2024-06-10 10:33:39.011817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.076 10:33:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:11.458 10:33:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.458 00:06:11.458 real 0m1.331s 00:06:11.458 user 0m1.219s 00:06:11.458 sys 0m0.117s 00:06:11.458 10:33:40 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.458 10:33:40 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:11.458 ************************************ 00:06:11.458 END TEST accel_crc32c 00:06:11.458 ************************************ 00:06:11.458 10:33:40 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:11.458 10:33:40 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:11.458 10:33:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.458 10:33:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.458 ************************************ 00:06:11.458 START TEST accel_crc32c_C2 00:06:11.458 ************************************ 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:11.458 10:33:40 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:11.458 [2024-06-10 10:33:40.267466] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:11.458 [2024-06-10 10:33:40.267513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900471 ] 00:06:11.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.458 [2024-06-10 10:33:40.326298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.458 [2024-06-10 10:33:40.397788] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.458 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.459 10:33:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.837 
10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.837 00:06:12.837 real 0m1.328s 00:06:12.837 user 0m1.225s 00:06:12.837 sys 0m0.108s 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.837 10:33:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 ************************************ 00:06:12.837 END TEST accel_crc32c_C2 00:06:12.837 ************************************ 00:06:12.837 10:33:41 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:12.837 10:33:41 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:12.837 10:33:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.837 10:33:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 ************************************ 00:06:12.837 START TEST accel_copy 00:06:12.837 ************************************ 00:06:12.837 10:33:41 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:12.837 10:33:41 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:12.837 [2024-06-10 10:33:41.652654] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:12.837 [2024-06-10 10:33:41.652707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900722 ] 00:06:12.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.837 [2024-06-10 10:33:41.713501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.837 [2024-06-10 10:33:41.786777] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:12.837 10:33:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
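The val=, case "$var", IFS=: and read -r var val lines that repeat above and below belong to one parsing loop in accel/accel.sh that consumes the test configuration line by line. A simplified sketch reconstructed from the xtrace; the input source and the whitespace trim are assumptions, while the loop shape (IFS=:, read -r var val, case "$var") and the accel_opc/accel_module assignments are visible in the trace itself:

  while IFS=: read -r var val; do       # accel/accel.sh@19: split each line on ':'
      val=${val## }                     # assumed trim step, yielding the 'val=...' xtrace lines
      case "$var" in                    # accel/accel.sh@21; the patterns below are assumptions
          *"Module"*) accel_module=$val ;;        # accel/accel.sh@22, e.g. 'software'
          *"Workload Type"*) accel_opc=$val ;;    # accel/accel.sh@23, e.g. 'copy'
      esac
  done

Once the loop drains, accel/accel.sh@27 asserts that both values were captured and that the reported module matches the expected software module, which is the [[ software == \s\o\f\t\w\a\r\e ]] check seen after each run.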
00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:14.244 10:33:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.244 00:06:14.244 real 0m1.334s 00:06:14.244 user 0m1.223s 00:06:14.244 sys 0m0.116s 00:06:14.244 10:33:42 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.244 10:33:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:14.244 ************************************ 00:06:14.244 END TEST accel_copy 00:06:14.244 ************************************ 00:06:14.244 10:33:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.244 10:33:42 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:14.244 10:33:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.244 10:33:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.244 ************************************ 00:06:14.244 START TEST accel_fill 00:06:14.244 ************************************ 00:06:14.244 10:33:43 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.244 10:33:43 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:14.244 [2024-06-10 10:33:43.045133] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:14.244 [2024-06-10 10:33:43.045196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900991 ] 00:06:14.244 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.244 [2024-06-10 10:33:43.103820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.244 [2024-06-10 10:33:43.174728] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.244 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:14.245 10:33:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:15.624 10:33:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.624 00:06:15.624 real 0m1.327s 00:06:15.624 user 0m1.219s 00:06:15.624 sys 0m0.113s 00:06:15.624 10:33:44 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.624 10:33:44 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:15.624 ************************************ 00:06:15.624 END TEST accel_fill 00:06:15.624 ************************************ 00:06:15.624 10:33:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:15.624 10:33:44 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:15.624 10:33:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.624 10:33:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.624 ************************************ 00:06:15.624 START TEST accel_copy_crc32c 00:06:15.624 ************************************ 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
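The build_accel_config trace that precedes every accel_perf run here (accel_json_cfg=(), three [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=,, jq -r .) suggests a small generator that joins optional accel-module entries into one JSON document. A sketch of that shape, with hypothetical toggle names and method strings; only the structure is confirmed by the trace:

  build_accel_config() {
      accel_json_cfg=()                       # accel/accel.sh@31: start with no module entries
      # guarded toggles, all evaluating to 0 in these runs (the [[ 0 -gt 0 ]] lines);
      # SPDK_TEST_ACCEL_DSA and the method string are illustrative, not confirmed by this log
      [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] &&
          accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
      local IFS=,                             # accel/accel.sh@40: comma-join the array entries
      printf '{"subsystems": [{"subsystem": "accel", "config": [%s]}]}' "${accel_json_cfg[*]}" |
          jq -r .                             # accel/accel.sh@41: validate and pretty-print
  }

The -c /dev/fd/62 argument on every accel_perf command line is consistent with this JSON being handed over through process substitution, along the lines of accel_perf -c <(build_accel_config) -t 1 -w copy_crc32c -y -C 2.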
00:06:15.624 [2024-06-10 10:33:44.428680] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:15.624 [2024-06-10 10:33:44.428749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901245 ] 00:06:15.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.624 [2024-06-10 10:33:44.488684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.624 [2024-06-10 10:33:44.563504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.624 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.625 10:33:44 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.625 10:33:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.002 00:06:17.002 real 0m1.334s 00:06:17.002 user 0m1.227s 00:06:17.002 sys 0m0.113s 00:06:17.002 10:33:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.003 10:33:45 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:17.003 ************************************ 00:06:17.003 END TEST accel_copy_crc32c 00:06:17.003 ************************************ 00:06:17.003 10:33:45 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.003 10:33:45 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:17.003 10:33:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.003 10:33:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.003 ************************************ 00:06:17.003 START TEST accel_copy_crc32c_C2 00:06:17.003 ************************************ 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:17.003 [2024-06-10 10:33:45.823042] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:17.003 [2024-06-10 10:33:45.823107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901517 ] 00:06:17.003 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.003 [2024-06-10 10:33:45.883555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.003 [2024-06-10 10:33:45.955449] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:17.003 10:33:46 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.003 10:33:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.381 00:06:18.381 real 0m1.333s 00:06:18.381 user 0m1.218s 00:06:18.381 sys 0m0.120s 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.381 10:33:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:18.381 ************************************ 00:06:18.381 END TEST accel_copy_crc32c_C2 00:06:18.381 ************************************ 00:06:18.381 10:33:47 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:18.381 10:33:47 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:18.381 10:33:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.381 10:33:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.381 ************************************ 00:06:18.381 START TEST accel_dualcast 00:06:18.381 ************************************ 00:06:18.381 10:33:47 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:18.381 [2024-06-10 10:33:47.209431] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
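[editor's note] Both copy_crc32c passes above (single-source, then -C 2 with two chained sources) completed on the software module in roughly 1.33 s wall time each. As a minimal sketch, the two cases can be re-run by hand with the same accel_perf flags the harness traced above; the binary path is taken verbatim from the log, and the harness's -c /dev/fd/62 JSON-config pipe is dropped since it only carries the empty accel config here:

  # copy_crc32c copies the source buffer into the destination while
  # computing its CRC-32C; -y verifies the result, -t 1 runs for 1 second,
  # and -C 2 chains two source buffers into one checksum.
  cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y        # single-source case
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # two chained sources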
00:06:18.381 [2024-06-10 10:33:47.209476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901769 ] 00:06:18.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.381 [2024-06-10 10:33:47.268622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.381 [2024-06-10 10:33:47.339171] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 
10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:18.381 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:18.382 10:33:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.759 10:33:48 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:19.759 10:33:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.759 00:06:19.759 real 0m1.329s 00:06:19.759 user 0m1.215s 00:06:19.759 sys 0m0.118s 00:06:19.759 10:33:48 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:19.759 10:33:48 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:19.759 ************************************ 00:06:19.759 END TEST accel_dualcast 00:06:19.759 ************************************ 00:06:19.759 10:33:48 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:19.759 10:33:48 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:19.759 10:33:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:19.759 10:33:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.759 ************************************ 00:06:19.759 START TEST accel_compare 00:06:19.759 ************************************ 00:06:19.759 10:33:48 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:19.759 10:33:48 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:19.759 [2024-06-10 10:33:48.584407] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
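[editor's note] The dualcast run above also passed on the software path (real 0m1.329s). dualcast writes a single 4096-byte source to two destinations in one operation; as a hedged shell-level analogy only (temporary file names invented — the real op is a single offloadable memory operation, not two file writes):

  # One read of the payload, two identical writes: dualcast semantics.
  printf 'payload' | tee /tmp/dst1 > /tmp/dst2
  cmp /tmp/dst1 /tmp/dst2 && echo 'both destinations match'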
00:06:19.759 [2024-06-10 10:33:48.584453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902019 ] 00:06:19.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.760 [2024-06-10 10:33:48.643088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.760 [2024-06-10 10:33:48.714081] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:19.760 10:33:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:21.137 10:33:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.137 00:06:21.137 real 0m1.330s 00:06:21.137 user 0m1.220s 00:06:21.137 sys 0m0.116s 00:06:21.137 10:33:49 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.137 10:33:49 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:21.137 ************************************ 00:06:21.137 END TEST accel_compare 00:06:21.137 ************************************ 00:06:21.137 10:33:49 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:21.137 10:33:49 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:21.137 10:33:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.137 10:33:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.137 ************************************ 00:06:21.137 START TEST accel_xor 00:06:21.137 ************************************ 00:06:21.137 10:33:49 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:21.137 10:33:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:21.137 [2024-06-10 10:33:49.972168] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
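[editor's note] compare, just finished above, simply checks that two 4096-byte buffers are byte-identical and reports success or failure. A rough shell equivalent, with invented /tmp paths:

  # cmp exits 0 when the buffers are equal, as the accel compare op did here.
  head -c 4096 /dev/zero > /tmp/a
  cp /tmp/a /tmp/b
  cmp -s /tmp/a /tmp/b && echo 'buffers equal'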
00:06:21.137 [2024-06-10 10:33:49.972220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902272 ] 00:06:21.137 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.137 [2024-06-10 10:33:50.034256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.137 [2024-06-10 10:33:50.112135] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:21.137 10:33:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.513 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 
10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.514 00:06:22.514 real 0m1.339s 00:06:22.514 user 0m1.227s 00:06:22.514 sys 0m0.117s 00:06:22.514 10:33:51 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.514 10:33:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:22.514 ************************************ 00:06:22.514 END TEST accel_xor 00:06:22.514 ************************************ 00:06:22.514 10:33:51 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:22.514 10:33:51 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:22.514 10:33:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.514 10:33:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.514 ************************************ 00:06:22.514 START TEST accel_xor 00:06:22.514 ************************************ 00:06:22.514 10:33:51 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:22.514 [2024-06-10 10:33:51.362550] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
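[editor's note] The first xor pass above used the default two sources (val=2 in the trace); the run starting here repeats it with -x 3. xor folds N source buffers into the destination byte by byte, so a sketch of the per-byte arithmetic with invented byte values:

  # Two-source vs. three-source XOR of a single byte.
  printf '0x%02x\n' $(( 0xF0 ^ 0x0F ))          # 0xff  (-x 2 case)
  printf '0x%02x\n' $(( 0xF0 ^ 0x0F ^ 0xFF ))   # 0x00  (-x 3 case)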
00:06:22.514 [2024-06-10 10:33:51.362605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902536 ] 00:06:22.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.514 [2024-06-10 10:33:51.422154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.514 [2024-06-10 10:33:51.493172] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.514 10:33:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.891 
10:33:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:23.891 10:33:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.891 00:06:23.891 real 0m1.322s 00:06:23.891 user 0m1.208s 00:06:23.891 sys 0m0.119s 00:06:23.891 10:33:52 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.891 10:33:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:23.891 ************************************ 00:06:23.891 END TEST accel_xor 00:06:23.891 ************************************ 00:06:23.891 10:33:52 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:23.891 10:33:52 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:23.891 10:33:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.891 10:33:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.891 ************************************ 00:06:23.891 START TEST accel_dif_verify 00:06:23.891 ************************************ 00:06:23.891 10:33:52 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:23.891 [2024-06-10 10:33:52.743325] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
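[editor's note] dif_verify, starting here, validates T10 DIF protection information: an 8-byte tuple (16-bit guard CRC, 16-bit application tag, 32-bit reference tag) accompanies each protected block. With the parameters this test sets below (4096-byte transfer, 512-byte blocks, 8-byte tuples), the bookkeeping works out as:

  # Eight protected blocks per transfer, 64 bytes of protection info total.
  echo $(( 4096 / 512 ))        # 8 blocks
  echo $(( (4096 / 512) * 8 ))  # 64 bytes of DIF tuples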
00:06:23.891 10:33:52 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:23.891 ************************************
00:06:23.891 START TEST accel_dif_verify
00:06:23.891 ************************************
00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=(); no optional modules configured; jq -r .)
00:06:23.891 [2024-06-10 10:33:52.743325] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:23.891 [2024-06-10 10:33:52.743380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902787 ]
00:06:23.891 EAL: No free 2048 kB hugepages reported on node 1
00:06:23.891 [2024-06-10 10:33:52.803893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.891 [2024-06-10 10:33:52.874942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.891 10:33:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # config pairs: val=0x1 val=dif_verify (accel_opc=dif_verify) val='4096 bytes' val='4096 bytes' val='512 bytes' val='8 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=No
00:06:25.087 10:33:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:25.087 10:33:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:25.087 10:33:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:25.087 real 0m1.326s
00:06:25.087 user 0m1.217s
00:06:25.087 sys 0m0.115s
00:06:25.087 ************************************
00:06:25.087 END TEST accel_dif_verify
00:06:25.087 ************************************
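The case "$var" / IFS=: / read -r var val triplets that dominate the raw trace come from one parsing loop in accel/accel.sh, replaying colon-separated var:val pairs one at a time. A reduced sketch of that loop's shape (only the two pairs the final assertions consume are kept; the real script handles the full set):

while IFS=: read -r var val; do
    case "$var" in
        accel_module) accel_module=$val ;;    # e.g. software
        accel_opc)    accel_opc=$val ;;       # e.g. dif_verify
    esac
done    # in accel.sh the loop is fed from the harness's stream of pairs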
00:06:25.087 10:33:54 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:25.087 ************************************
00:06:25.087 START TEST accel_dif_generate
00:06:25.087 ************************************
00:06:25.087 10:33:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:25.346 [2024-06-10 10:33:54.128542] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:25.346 [2024-06-10 10:33:54.128596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903043 ]
00:06:25.346 EAL: No free 2048 kB hugepages reported on node 1
00:06:25.346 [2024-06-10 10:33:54.190587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.346 [2024-06-10 10:33:54.262317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.346 10:33:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # config pairs: val=0x1 val=dif_generate (accel_opc=dif_generate) val='4096 bytes' val='4096 bytes' val='512 bytes' val='8 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=No
00:06:26.723 10:33:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:26.723 10:33:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:26.723 10:33:55 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:26.723 real 0m1.330s
00:06:26.723 user 0m1.225s
00:06:26.723 sys 0m0.110s
00:06:26.723 ************************************
00:06:26.723 END TEST accel_dif_generate
00:06:26.723 ************************************
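The START/END banners and the real/user/sys lines wrapped around each case come from the run_test helper in common/autotest_common.sh. A hypothetical reduction of its shape, not the actual implementation:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # the wrapped command, e.g. accel_test -t 1 -w dif_generate
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}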
00:06:26.723 10:33:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:26.724 ************************************
00:06:26.724 START TEST accel_dif_generate_copy
00:06:26.724 ************************************
00:06:26.724 10:33:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:26.724 [2024-06-10 10:33:55.514677] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:26.724 [2024-06-10 10:33:55.514724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903302 ]
00:06:26.724 EAL: No free 2048 kB hugepages reported on node 1
00:06:26.724 [2024-06-10 10:33:55.573945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.724 [2024-06-10 10:33:55.644302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.724 10:33:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # config pairs: val=0x1 val=dif_generate_copy (accel_opc=dif_generate_copy) val='4096 bytes' val='4096 bytes' val=software (accel_module=software) val=32 val=32 val=1 val='1 seconds' val=No
00:06:28.102 10:33:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:28.102 10:33:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:06:28.102 10:33:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:28.102 real 0m1.324s
00:06:28.102 user 0m1.214s
00:06:28.102 sys 0m0.116s
00:06:28.102 ************************************
00:06:28.102 END TEST accel_dif_generate_copy
00:06:28.102 ************************************
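The [[ y == y ]] check at accel.sh line 115 that opens the next block reads as a build-flag gate: both sides of the comparison are the expanded value of a compile-time option, so the compression cases only run when that option was set to y at configure time. A sketch under that assumption ($CONFIG_ISAL is a hypothetical name; the actual variable is not visible in this trace):

# hypothetical gate around the compression cases
if [[ "$CONFIG_ISAL" == y ]]; then
    run_test accel_comp accel_test -t 1 -w compress \
        -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib
fi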
00:06:28.102 10:33:56 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:06:28.102 10:33:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib
00:06:28.102 ************************************
00:06:28.102 START TEST accel_comp
00:06:28.102 ************************************
00:06:28.102 10:33:56 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib
00:06:28.102 [2024-06-10 10:33:56.900542] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:28.102 [2024-06-10 10:33:56.900596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903572 ]
00:06:28.102 EAL: No free 2048 kB hugepages reported on node 1
00:06:28.102 [2024-06-10 10:33:56.961262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.102 [2024-06-10 10:33:57.032501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.102 10:33:57 accel.accel_comp -- accel/accel.sh@20 -- # config pairs: val=0x1 val=compress (accel_opc=compress) val='4096 bytes' val=software (accel_module=software) val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib val=32 val=32 val=1 val='1 seconds' val=No
00:06:29.481 10:33:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:29.481 10:33:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:06:29.481 10:33:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:29.481 real 0m1.330s
00:06:29.481 user 0m0.013s
00:06:29.481 sys 0m0.001s
00:06:29.481 ************************************
00:06:29.481 END TEST accel_comp
00:06:29.481 ************************************
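The \s\o\f\t\w\a\r\e spelling in the trace is only xtrace escaping the right-hand side of a pattern match; nothing in the output is garbled. The three assertions each case ends with amount to:

[[ -n "$accel_module" ]]              # a module reported back at all
[[ -n "$accel_opc" ]]                 # the opcode was recognized, e.g. compress
[[ "$accel_module" == software ]]     # and the software path executed it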
00:06:29.481 10:33:58 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y
00:06:29.481 ************************************
00:06:29.481 START TEST accel_decomp
00:06:29.481 ************************************
00:06:29.481 10:33:58 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y
00:06:29.481 [2024-06-10 10:33:58.289975] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:06:29.481 [2024-06-10 10:33:58.290041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903830 ]
00:06:29.481 EAL: No free 2048 kB hugepages reported on node 1
00:06:29.481 [2024-06-10 10:33:58.349934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.481 [2024-06-10 10:33:58.421622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.481 10:33:58 accel.accel_decomp -- accel/accel.sh@20 -- # config pairs: val=0x1 val=decompress (accel_opc=decompress) val='4096 bytes' val=software (accel_module=software) val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib val=32 val=32 val=1 val='1 seconds' val=Yes
00:06:30.860 10:33:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:30.860 10:33:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:30.860 10:33:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:30.860 real 0m1.333s
00:06:30.860 user 0m1.225s
00:06:30.860 sys 0m0.114s
00:06:30.860 ************************************
00:06:30.860 END TEST accel_decomp
00:06:30.860 ************************************
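accel_decomp and the accel_decomp_full case that follows run the same decompress workload against the same bib file; the command lines differ only by -o 0, and the full run's config trace reports a '111250 bytes' buffer where the plain run reported '4096 bytes'. That suggests -o 0 makes accel_perf process the whole input as one operation, an inference from this trace rather than a documented flag semantic:

# both invocations verbatim from the xtrace, with the workspace path
# abbreviated to $testdir for readability
accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$testdir/bib" -y         # 4096-byte ops
accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$testdir/bib" -y -o 0    # full 111250-byte input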
10:33:59 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.860 10:33:59 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:30.860 10:33:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.860 10:33:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.860 ************************************ 00:06:30.860 START TEST accel_decomp_full 00:06:30.860 ************************************ 00:06:30.860 10:33:59 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:30.860 [2024-06-10 10:33:59.679307] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:30.860 [2024-06-10 10:33:59.679380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904088 ] 00:06:30.860 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.860 [2024-06-10 10:33:59.742129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.860 [2024-06-10 10:33:59.812655] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
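[Note] The bracketed line above is the DPDK EAL command line accel_perf was started with: --file-prefix=spdk_pidNNNN keeps each run's hugepage files separate so back-to-back tests cannot collide, and "EAL: No free 2048 kB hugepages reported on node 1" is informational here, meaning only that NUMA node 1 had no free 2 MB pages; the run proceeds on node 0's pages. The per-node counters can be inspected directly through standard Linux sysfs:

  # per-NUMA-node 2048 kB hugepage availability (standard sysfs paths)
  for n in /sys/devices/system/node/node*; do
          printf '%s: %s free of %s (2048 kB pages)\n' "${n##*/}" \
                  "$(cat "$n/hugepages/hugepages-2048kB/free_hugepages")" \
                  "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
  done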
00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.860 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:30.861 10:33:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.237 10:34:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.237 00:06:32.237 real 0m1.341s 00:06:32.237 user 0m1.226s 00:06:32.237 sys 0m0.120s 00:06:32.237 10:34:00 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.237 10:34:00 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:32.237 ************************************ 00:06:32.237 END TEST accel_decomp_full 00:06:32.237 ************************************ 00:06:32.237 10:34:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.237 10:34:01 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:32.237 10:34:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.237 10:34:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.237 ************************************ 00:06:32.237 START TEST accel_decomp_mcore 00:06:32.237 ************************************ 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:32.237 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:32.237 [2024-06-10 10:34:01.080223] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:32.237 [2024-06-10 10:34:01.080288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904358 ] 00:06:32.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.237 [2024-06-10 10:34:01.140352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.237 [2024-06-10 10:34:01.213701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.237 [2024-06-10 10:34:01.213801] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.237 [2024-06-10 10:34:01.213893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.238 [2024-06-10 10:34:01.213895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:32.238 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 10:34:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
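[Note] For this mcore variant the harness passed -m 0xf, so accel_perf asked the EAL for four cores; the four "Reactor started on core N" notices above confirm the mask was honored, and the timing block below packs roughly 4.5s of user CPU into about 1.3s of wall time. Expanding such a hex core mask into a core list is a short bash helper (illustrative, not from the SPDK tree):

  mask=0xf
  cores=()
  for ((i = 0; i < 64; i++)); do
          (( (mask >> i) & 1 )) && cores+=("$i")   # test bit i of the mask
  done
  echo "mask $mask -> cores: ${cores[*]}"          # prints: mask 0xf -> cores: 0 1 2 3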
00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.479 00:06:33.479 real 0m1.348s 00:06:33.479 user 0m4.557s 00:06:33.479 sys 0m0.130s 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.479 10:34:02 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:33.479 ************************************ 00:06:33.479 END TEST accel_decomp_mcore 00:06:33.479 ************************************ 00:06:33.479 10:34:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.479 10:34:02 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:33.479 10:34:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.479 10:34:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.479 ************************************ 00:06:33.479 START TEST accel_decomp_full_mcore 00:06:33.479 ************************************ 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:33.479 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:33.479 [2024-06-10 10:34:02.494055] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:33.479 [2024-06-10 10:34:02.494118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904621 ] 00:06:33.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.738 [2024-06-10 10:34:02.555011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.738 [2024-06-10 10:34:02.630375] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.738 [2024-06-10 10:34:02.630472] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.738 [2024-06-10 10:34:02.630560] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.738 [2024-06-10 10:34:02.630562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.738 10:34:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.125 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.126 00:06:35.126 real 0m1.366s 00:06:35.126 user 0m4.617s 00:06:35.126 sys 0m0.124s 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.126 10:34:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:35.126 ************************************ 00:06:35.126 END TEST accel_decomp_full_mcore 00:06:35.126 ************************************ 00:06:35.126 10:34:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.126 10:34:03 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:35.126 10:34:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.126 10:34:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.126 ************************************ 00:06:35.126 START TEST accel_decomp_mthread 00:06:35.126 ************************************ 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:35.126 10:34:03 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
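[Note] Two details of the accel_decomp_mthread invocation above are worth unpacking. First, judging by the -T 2 argument and the matching val=2 trace line below, this variant runs two worker threads on the single core selected by -c 0x1, exercising intra-core threading where the mcore variants exercise multiple reactors. Second, the config path -c /dev/fd/62 is not a file on disk: build_accel_config joins any module JSON fragments (accel_json_cfg is empty here, hence the failing "[[ -n '' ]]" checks) and hands the result to the binary through process substitution, validated by the traced "jq -r .". A self-contained demo of that plumbing, with a stand-in function and JSON rather than the SPDK helpers:

  build_cfg() { printf '%s' "$1" | jq -r .; }   # jq -r . validates and pretty-prints the JSON
  cat <(build_cfg '{"subsystems":[]}')          # the child command sees a /dev/fd/NN pseudo-file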
00:06:35.126 [2024-06-10 10:34:03.928608] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:35.126 [2024-06-10 10:34:03.928658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904877 ] 00:06:35.126 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.126 [2024-06-10 10:34:03.990744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.126 [2024-06-10 10:34:04.060087] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:35.126 10:34:04 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.505 00:06:36.505 real 0m1.343s 00:06:36.505 user 0m1.232s 00:06:36.505 sys 0m0.123s 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.505 10:34:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:36.505 ************************************ 00:06:36.505 END TEST accel_decomp_mthread 00:06:36.505 ************************************ 00:06:36.505 10:34:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.505 10:34:05 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:36.505 10:34:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.505 10:34:05 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.505 ************************************ 00:06:36.505 START TEST accel_decomp_full_mthread 00:06:36.505 ************************************ 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:36.505 [2024-06-10 10:34:05.336159] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
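[Note] accel_decomp_full_mthread combines the two preceding variations: the trace suggests -o 0 selects the full 111250-byte bib buffer (compare the val='111250 bytes' lines here against val='4096 bytes' in the non-full variants) while -T 2 again requests two threads. Since every variant runs for a fixed -t 1 second, the near-identical real times (about 1.3-1.4s each) say nothing about relative throughput; the pass/fail signal is the decompress-and-verify check. The wall/user/sys blocks can be compared across tests afterwards by mining a saved copy of this console output (the log filename below is hypothetical):

  grep -oE '(real|user|sys) [0-9]+m[0-9.]+s' nvmf-cvl-phy-autotest-console.log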
00:06:36.505 [2024-06-10 10:34:05.336208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905119 ] 00:06:36.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.505 [2024-06-10 10:34:05.395489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.505 [2024-06-10 10:34:05.466702] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.505 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.506 10:34:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.885 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.886 00:06:37.886 real 0m1.362s 00:06:37.886 user 0m1.258s 00:06:37.886 sys 0m0.116s 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.886 10:34:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:37.886 ************************************ 00:06:37.886 END TEST accel_decomp_full_mthread 00:06:37.886 
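
The long run of 'case "$var"' / 'IFS=: read -r var val' traces above is the accel harness consuming its per-test option stream (opcode, payload size, engine module, thread count) one name/value pair at a time. A minimal sketch of that parsing pattern, assuming an illustrative inline stream; the real harness reads many more keys:

    printf '%s\n' opc:decompress module:software threads:2 |
        while IFS=: read -r var val; do
            case "$var" in
                opc)     accel_opc=$val ;;      # operation under test
                module)  accel_module=$val ;;   # software engine
                threads) accel_threads=$val ;;  # worker thread count
            esac
            echo "parsed $var -> $val"
        done
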
************************************ 00:06:37.886 10:34:06 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:37.886 10:34:06 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.886 10:34:06 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:37.886 10:34:06 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:37.886 10:34:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.886 10:34:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.886 10:34:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.886 10:34:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.886 10:34:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.886 10:34:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.886 10:34:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.886 10:34:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:37.886 10:34:06 accel -- accel/accel.sh@41 -- # jq -r . 00:06:37.886 ************************************ 00:06:37.886 START TEST accel_dif_functional_tests 00:06:37.886 ************************************ 00:06:37.886 10:34:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.886 [2024-06-10 10:34:06.777065] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:37.886 [2024-06-10 10:34:06.777100] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905368 ] 00:06:37.886 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.886 [2024-06-10 10:34:06.835138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.886 [2024-06-10 10:34:06.907141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.886 [2024-06-10 10:34:06.907238] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.886 [2024-06-10 10:34:06.907240] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.145 00:06:38.145 00:06:38.145 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.145 http://cunit.sourceforge.net/ 00:06:38.145 00:06:38.145 00:06:38.145 Suite: accel_dif 00:06:38.145 Test: verify: DIF generated, GUARD check ...passed 00:06:38.145 Test: verify: DIF generated, APPTAG check ...passed 00:06:38.145 Test: verify: DIF generated, REFTAG check ...passed 00:06:38.145 Test: verify: DIF not generated, GUARD check ...[2024-06-10 10:34:06.972895] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.145 passed 00:06:38.145 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 10:34:06.972943] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.145 passed 00:06:38.145 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 10:34:06.972981] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.145 passed 00:06:38.145 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:38.145 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 10:34:06.973022] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:38.145 passed 00:06:38.145 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:38.145 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:38.145 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:38.145 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 10:34:06.973116] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:38.145 passed 00:06:38.145 Test: verify copy: DIF generated, GUARD check ...passed 00:06:38.145 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:38.145 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:38.145 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 10:34:06.973220] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:38.145 passed 00:06:38.145 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 10:34:06.973247] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:38.145 passed 00:06:38.145 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 10:34:06.973265] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:38.145 passed 00:06:38.145 Test: generate copy: DIF generated, GUARD check ...passed 00:06:38.145 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:38.145 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:38.145 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:38.145 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:38.145 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:38.145 Test: generate copy: iovecs-len validate ...[2024-06-10 10:34:06.973422] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
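
The DIF suite above is launched as 'dif -c /dev/fd/62': build_accel_config assembles the accel JSON config in memory and the harness hands it to the binary through a process-substitution file descriptor instead of a temp file. A sketch of that invocation, assuming an empty config is acceptable for illustration:

    # Hand an in-memory JSON config to the dif binary via process
    # substitution (this placeholder config is illustrative, not the real one):
    ./test/accel/dif/dif -c <(echo '{"subsystems": []}')
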
00:06:38.145 passed 00:06:38.145 Test: generate copy: buffer alignment validate ...passed 00:06:38.145 00:06:38.145 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.145 suites 1 1 n/a 0 0 00:06:38.145 tests 26 26 26 0 0 00:06:38.145 asserts 115 115 115 0 n/a 00:06:38.145 00:06:38.145 Elapsed time = 0.002 seconds 00:06:38.145 00:06:38.145 real 0m0.400s 00:06:38.145 user 0m0.606s 00:06:38.145 sys 0m0.142s 00:06:38.145 10:34:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.145 10:34:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:38.145 ************************************ 00:06:38.145 END TEST accel_dif_functional_tests 00:06:38.145 ************************************ 00:06:38.145 00:06:38.145 real 0m30.705s 00:06:38.146 user 0m34.476s 00:06:38.146 sys 0m4.143s 00:06:38.146 10:34:07 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.146 10:34:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.146 ************************************ 00:06:38.146 END TEST accel 00:06:38.146 ************************************ 00:06:38.405 10:34:07 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.405 10:34:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:38.405 10:34:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.405 10:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:38.405 ************************************ 00:06:38.405 START TEST accel_rpc 00:06:38.405 ************************************ 00:06:38.405 10:34:07 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:38.405 * Looking for test storage... 00:06:38.405 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel 00:06:38.405 10:34:07 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.405 10:34:07 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3905440 00:06:38.405 10:34:07 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3905440 00:06:38.405 10:34:07 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.405 10:34:07 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 3905440 ']' 00:06:38.405 10:34:07 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.405 10:34:07 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:38.405 10:34:07 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.405 10:34:07 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:38.405 10:34:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.405 [2024-06-10 10:34:07.374319] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:38.405 [2024-06-10 10:34:07.374368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905440 ] 00:06:38.405 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.405 [2024-06-10 10:34:07.435003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.664 [2024-06-10 10:34:07.510606] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.233 10:34:08 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:39.233 10:34:08 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:39.233 10:34:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:39.233 10:34:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:39.233 10:34:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:39.233 10:34:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:39.233 10:34:08 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:39.233 10:34:08 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:39.233 10:34:08 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.233 10:34:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.233 ************************************ 00:06:39.233 START TEST accel_assign_opcode 00:06:39.233 ************************************ 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.233 [2024-06-10 10:34:08.196629] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.233 [2024-06-10 10:34:08.204640] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:39.233 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.492 10:34:08 
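
By this point the accel_assign_opcode test has driven the pre-init RPC flow: assign the copy opcode while the target waits in --wait-for-rpc, complete startup with framework_start_init, and read the assignment back (the jq/grep verification of that read-back continues just below). The same three calls, condensed, with the default RPC socket assumed:

    scripts/rpc.py accel_assign_opc -o copy -m software      # while in --wait-for-rpc
    scripts/rpc.py framework_start_init                      # complete target startup
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software
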
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:39.492 software 00:06:39.492 00:06:39.492 real 0m0.229s 00:06:39.492 user 0m0.043s 00:06:39.492 sys 0m0.004s 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.492 10:34:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:39.492 ************************************ 00:06:39.492 END TEST accel_assign_opcode 00:06:39.492 ************************************ 00:06:39.492 10:34:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3905440 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 3905440 ']' 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 3905440 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3905440 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3905440' 00:06:39.492 killing process with pid 3905440 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@968 -- # kill 3905440 00:06:39.492 10:34:08 accel_rpc -- common/autotest_common.sh@973 -- # wait 3905440 00:06:40.061 00:06:40.061 real 0m1.557s 00:06:40.061 user 0m1.620s 00:06:40.061 sys 0m0.406s 00:06:40.061 10:34:08 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.061 10:34:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.061 ************************************ 00:06:40.061 END TEST accel_rpc 00:06:40.061 ************************************ 00:06:40.061 10:34:08 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.061 10:34:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:40.061 10:34:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.061 10:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:40.061 ************************************ 00:06:40.061 START TEST app_cmdline 00:06:40.061 ************************************ 00:06:40.061 10:34:08 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.061 * Looking for test storage... 
00:06:40.061 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:06:40.061 10:34:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:40.061 10:34:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3905746 00:06:40.061 10:34:08 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:40.061 10:34:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3905746 00:06:40.061 10:34:08 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 3905746 ']' 00:06:40.061 10:34:08 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.061 10:34:08 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:40.061 10:34:08 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.061 10:34:08 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:40.061 10:34:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.061 [2024-06-10 10:34:08.984974] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:40.061 [2024-06-10 10:34:08.985028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905746 ] 00:06:40.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.061 [2024-06-10 10:34:09.043132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.320 [2024-06-10 10:34:09.122151] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.888 10:34:09 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:40.888 10:34:09 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:40.889 10:34:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:41.148 { 00:06:41.148 "version": "SPDK v24.09-pre git sha1 e55c9a812", 00:06:41.148 "fields": { 00:06:41.148 "major": 24, 00:06:41.148 "minor": 9, 00:06:41.148 "patch": 0, 00:06:41.148 "suffix": "-pre", 00:06:41.148 "commit": "e55c9a812" 00:06:41.148 } 00:06:41.148 } 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:41.148 10:34:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:06:41.148 10:34:09 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.148 request: 00:06:41.148 { 00:06:41.148 "method": "env_dpdk_get_mem_stats", 00:06:41.148 "req_id": 1 00:06:41.148 } 00:06:41.148 Got JSON-RPC error response 00:06:41.148 response: 00:06:41.148 { 00:06:41.148 "code": -32601, 00:06:41.148 "message": "Method not found" 00:06:41.148 } 00:06:41.148 10:34:10 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:41.148 10:34:10 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:41.148 10:34:10 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:41.148 10:34:10 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:41.148 10:34:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3905746 00:06:41.148 10:34:10 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 3905746 ']' 00:06:41.148 10:34:10 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 3905746 00:06:41.148 10:34:10 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:41.407 10:34:10 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:41.407 10:34:10 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3905746 00:06:41.407 10:34:10 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:41.407 10:34:10 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:41.407 10:34:10 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3905746' 00:06:41.407 killing process with pid 3905746 00:06:41.407 10:34:10 app_cmdline -- common/autotest_common.sh@968 -- # kill 3905746 00:06:41.407 10:34:10 app_cmdline -- common/autotest_common.sh@973 -- # wait 3905746 00:06:41.667 00:06:41.667 real 0m1.667s 00:06:41.667 user 0m1.978s 00:06:41.667 sys 0m0.432s 00:06:41.667 10:34:10 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.667 10:34:10 
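
Two RPC behaviors from the cmdline run above, condensed: spdk_get_version returns the JSON reply shown earlier, whose fields are individually addressable, and any method outside the --rpcs-allowed list is rejected with JSON-RPC error -32601, which is exactly what the NOT wrapper asserts:

    scripts/rpc.py spdk_get_version | jq -r '.fields.major'   # -> 24
    # Any method outside the allowed list fails with -32601:
    if ! scripts/rpc.py env_dpdk_get_mem_stats; then
        echo 'rejected: Method not found (-32601)'
    fi
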
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.667 ************************************ 00:06:41.667 END TEST app_cmdline 00:06:41.667 ************************************ 00:06:41.667 10:34:10 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:06:41.667 10:34:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:41.667 10:34:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.667 10:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.667 ************************************ 00:06:41.667 START TEST version 00:06:41.667 ************************************ 00:06:41.667 10:34:10 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:06:41.667 * Looking for test storage... 00:06:41.667 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:06:41.667 10:34:10 version -- app/version.sh@17 -- # get_header_version major 00:06:41.667 10:34:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # cut -f2 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.667 10:34:10 version -- app/version.sh@17 -- # major=24 00:06:41.667 10:34:10 version -- app/version.sh@18 -- # get_header_version minor 00:06:41.667 10:34:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # cut -f2 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.667 10:34:10 version -- app/version.sh@18 -- # minor=9 00:06:41.667 10:34:10 version -- app/version.sh@19 -- # get_header_version patch 00:06:41.667 10:34:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # cut -f2 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.667 10:34:10 version -- app/version.sh@19 -- # patch=0 00:06:41.667 10:34:10 version -- app/version.sh@20 -- # get_header_version suffix 00:06:41.667 10:34:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # cut -f2 00:06:41.667 10:34:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.667 10:34:10 version -- app/version.sh@20 -- # suffix=-pre 00:06:41.667 10:34:10 version -- app/version.sh@22 -- # version=24.9 00:06:41.667 10:34:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.667 10:34:10 version -- app/version.sh@28 -- # version=24.9rc0 00:06:41.667 10:34:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:06:41.667 10:34:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.971 10:34:10 version -- app/version.sh@30 -- # py_version=24.9rc0 
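
version.sh never talks to a running target; it scrapes include/spdk/version.h with the grep/cut/tr pipeline traced above and compares the assembled string against python's spdk.__version__. The extraction helper, as traced:

    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # -> 24
    minor=$(get_header_version MINOR)    # -> 9
    suffix=$(get_header_version SUFFIX)  # -> -pre
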
00:06:41.971 10:34:10 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:41.971 00:06:41.971 real 0m0.147s 00:06:41.971 user 0m0.081s 00:06:41.971 sys 0m0.101s 00:06:41.971 10:34:10 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.971 10:34:10 version -- common/autotest_common.sh@10 -- # set +x 00:06:41.971 ************************************ 00:06:41.971 END TEST version 00:06:41.971 ************************************ 00:06:41.971 10:34:10 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:41.971 10:34:10 -- spdk/autotest.sh@198 -- # uname -s 00:06:41.971 10:34:10 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:41.971 10:34:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.971 10:34:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:41.971 10:34:10 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:41.971 10:34:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:41.971 10:34:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:41.971 10:34:10 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:41.971 10:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.971 10:34:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:41.971 10:34:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:41.971 10:34:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:41.971 10:34:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:41.971 10:34:10 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:06:41.971 10:34:10 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:41.971 10:34:10 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:41.971 10:34:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.971 10:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.971 ************************************ 00:06:41.971 START TEST nvmf_rdma 00:06:41.971 ************************************ 00:06:41.971 10:34:10 nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:41.971 * Looking for test storage... 00:06:41.971 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.971 10:34:10 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:41.971 10:34:10 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.971 10:34:10 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.972 10:34:10 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.972 10:34:10 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.972 10:34:10 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.972 10:34:10 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.972 10:34:10 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:06:41.972 10:34:10 nvmf_rdma -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:41.972 10:34:10 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:41.972 10:34:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:41.972 10:34:10 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:41.972 10:34:10 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:41.972 10:34:10 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.972 10:34:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:41.972 ************************************ 00:06:41.972 START TEST nvmf_example 00:06:41.972 ************************************ 00:06:41.972 10:34:10 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:42.231 * Looking for test storage... 
00:06:42.231 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.231 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:42.232 10:34:11 
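
Both common.sh sourcings above derive the host identity the same way: 'nvme gen-hostnqn' produces the host NQN and its trailing UUID becomes the host ID. A sketch; the parameter expansion is an assumption that matches the values shown in the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the uuid
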
nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.232 10:34:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:48.802 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:48.802 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@377 -- # modinfo irdma 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@377 -- # modprobe irdma 
roce_ena=1 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:48.802 Found net devices under 0000:af:00.0: cvl_0_0 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:48.802 Found net devices under 0000:af:00.1: cvl_0_1 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- 
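
The discovery pass above finds the two e810 functions (0x8086:0x159b), loads the kernel RDMA stack, switches irdma into RoCE mode, and maps each PCI function to its kernel net devices via sysfs. Condensed:

    # Kernel RDMA stack plus the e810 RDMA driver in RoCE mode, as traced:
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    modprobe irdma roce_ena=1

    # Map one PCI function to its kernel net devices through sysfs:
    pci=0000:af:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${path##*/}"   # -> cvl_0_0
    done
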
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_0 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_1 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:06:48.802 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:06:48.802 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:06:48.802 altname enp175s0f0np0 00:06:48.802 altname ens801f0np0 00:06:48.802 inet 192.168.100.8/24 scope global cvl_0_0 00:06:48.802 valid_lft forever preferred_lft forever 00:06:48.802 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:06:48.802 valid_lft forever preferred_lft forever 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:48.802 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:06:48.803 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:06:48.803 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:06:48.803 altname enp175s0f1np1 00:06:48.803 altname ens801f1np1 00:06:48.803 inet 192.168.100.9/24 scope global cvl_0_1 00:06:48.803 valid_lft forever preferred_lft forever 00:06:48.803 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:06:48.803 valid_lft forever preferred_lft forever 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_0 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:06:48.803 10:34:16 
nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:48.803 192.168.100.9' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:48.803 192.168.100.9' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:48.803 192.168.100.9' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3909640 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3909640 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 3909640 ']' 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
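The trace above is nvmf/common.sh preparing the fabric: rdma_device_init loads the IB/RDMA kernel modules (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), get_rdma_if_list matches the RDMA-capable ports (cvl_0_0 and cvl_0_1 on this rig), and get_ip_address reads each port's IPv4 address back with an ip/awk/cut pipeline, yielding NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9. A minimal standalone sketch of that read-back pattern, using the interface names from this run (everything else is illustrative, not the full common.sh helper):

#!/usr/bin/env bash
# Collect the first IPv4 address of each RDMA-capable interface,
# mirroring the get_ip_address pipeline traced above.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one record per address; field 4 is the CIDR, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=()
for nic_name in cvl_0_0 cvl_0_1; do        # interface list taken from the trace above
    rdma_ips+=("$(get_ip_address "$nic_name")")
done
NVMF_FIRST_TARGET_IP=${rdma_ips[0]}        # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=${rdma_ips[1]}       # 192.168.100.9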
00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:48.803 10:34:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:48.803 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@863 -- # return 0
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:06:48.803 10:34:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
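With the target app (pid 3909640) up and listening on /var/tmp/spdk.sock, the test configures it over JSON-RPC: an RDMA transport with 1024 shared buffers and an 8192-byte in-capsule data size, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace, and an RDMA listener on 192.168.100.8:4420, before driving I/O with spdk_nvme_perf (4 KiB random I/O, 30% reads, queue depth 64, 10 s). A sketch of the same bring-up issued directly with SPDK's scripts/rpc.py (rpc_cmd in the trace is the autotest wrapper around the same RPCs; the rpc.py path assumes this workspace's checkout):

#!/usr/bin/env bash
# Reproduce the rpc_cmd sequence above against a running nvmf target;
# rpc.py talks to the default RPC socket, /var/tmp/spdk.sock.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                  # 64 MiB bdev, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

As a sanity check on the result table that follows: at the 4096-byte I/O size, 24872.02 IOPS x 4096 B / 2^20 = 97.16 MiB/s, which matches the reported bandwidth column.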
00:06:48.803 EAL: No free 2048 kB hugepages reported on node 1
00:07:01.055 Initializing NVMe Controllers
00:07:01.055 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:07:01.055 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:01.055 Initialization complete. Launching workers.
00:07:01.055 ========================================================
00:07:01.055                                                                            Latency(us)
00:07:01.055 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:07:01.055 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   24872.02      97.16    2573.00     515.14   14072.89
00:07:01.055 ========================================================
00:07:01.055 Total                                                                :   24872.02      97.16    2573.00     515.14   14072.89
00:07:01.055
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:07:01.055 rmmod nvme_rdma
00:07:01.055 rmmod nvme_fabrics
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3909640 ']'
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3909640
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 3909640 ']'
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 3909640
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # uname
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:07:01.055 10:34:28 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3909640
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']'
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3909640'
00:07:01.055 killing process with pid 3909640
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@968 -- # kill 3909640
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@973 -- # wait 3909640
00:07:01.055 nvmf threads initialize successfully
00:07:01.055 bdev subsystem init successfully
00:07:01.055 created a nvmf target service
00:07:01.055 create targets's poll groups done
00:07:01.055 all subsystems of target started
00:07:01.055 nvmf target is running
00:07:01.055 all subsystems of target stopped
00:07:01.055 destroy targets's poll groups done
00:07:01.055 destroyed the nvmf target service
00:07:01.055 bdev subsystem finish successfully
00:07:01.055 nvmf threads destroy successfully
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:01.055
00:07:01.055 real 0m18.275s
00:07:01.055 user 0m50.841s
00:07:01.055 sys 0m4.629s
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:01.055 10:34:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:07:01.055 ************************************
00:07:01.055 END TEST nvmf_example
00:07:01.055 ************************************
00:07:01.055 10:34:29 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:07:01.055 10:34:29 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:07:01.055 10:34:29 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:01.055 10:34:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:01.055 ************************************
00:07:01.055 START TEST nvmf_filesystem
00:07:01.055 ************************************
00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:07:01.055 * Looking for test storage...
00:07:01.055 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output ']' 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:01.055 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:01.056 
10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h ]] 00:07:01.056 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:01.056 #define SPDK_CONFIG_H 00:07:01.056 #define SPDK_CONFIG_APPS 1 00:07:01.056 #define SPDK_CONFIG_ARCH native 00:07:01.056 #undef SPDK_CONFIG_ASAN 00:07:01.056 #undef SPDK_CONFIG_AVAHI 00:07:01.056 #undef SPDK_CONFIG_CET 00:07:01.056 #define SPDK_CONFIG_COVERAGE 1 00:07:01.056 #define SPDK_CONFIG_CROSS_PREFIX 00:07:01.056 #undef SPDK_CONFIG_CRYPTO 00:07:01.056 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:01.056 #undef SPDK_CONFIG_CUSTOMOCF 00:07:01.056 #undef SPDK_CONFIG_DAOS 00:07:01.056 #define SPDK_CONFIG_DAOS_DIR 00:07:01.056 #define SPDK_CONFIG_DEBUG 1 00:07:01.056 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:01.056 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:07:01.056 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:01.056 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:01.056 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:01.056 #undef SPDK_CONFIG_DPDK_UADK 00:07:01.056 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:07:01.056 #define SPDK_CONFIG_EXAMPLES 1 00:07:01.056 #undef SPDK_CONFIG_FC 00:07:01.056 #define SPDK_CONFIG_FC_PATH 00:07:01.056 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:01.056 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:01.056 #undef SPDK_CONFIG_FUSE 00:07:01.056 #undef SPDK_CONFIG_FUZZER 00:07:01.056 #define SPDK_CONFIG_FUZZER_LIB 00:07:01.056 #undef SPDK_CONFIG_GOLANG 00:07:01.056 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:01.056 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:01.056 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:01.056 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:01.056 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:01.056 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:01.056 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:01.056 #define SPDK_CONFIG_IDXD 1 00:07:01.056 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:01.056 #undef SPDK_CONFIG_IPSEC_MB 00:07:01.056 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:01.056 #define SPDK_CONFIG_ISAL 1 00:07:01.056 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:01.056 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:01.056 #define SPDK_CONFIG_LIBDIR 00:07:01.056 #undef SPDK_CONFIG_LTO 00:07:01.056 #define SPDK_CONFIG_MAX_LCORES 00:07:01.056 #define 
SPDK_CONFIG_NVME_CUSE 1 00:07:01.056 #undef SPDK_CONFIG_OCF 00:07:01.056 #define SPDK_CONFIG_OCF_PATH 00:07:01.056 #define SPDK_CONFIG_OPENSSL_PATH 00:07:01.056 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:01.056 #define SPDK_CONFIG_PGO_DIR 00:07:01.056 #undef SPDK_CONFIG_PGO_USE 00:07:01.056 #define SPDK_CONFIG_PREFIX /usr/local 00:07:01.056 #undef SPDK_CONFIG_RAID5F 00:07:01.056 #undef SPDK_CONFIG_RBD 00:07:01.056 #define SPDK_CONFIG_RDMA 1 00:07:01.057 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:01.057 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:01.057 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:01.057 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:01.057 #define SPDK_CONFIG_SHARED 1 00:07:01.057 #undef SPDK_CONFIG_SMA 00:07:01.057 #define SPDK_CONFIG_TESTS 1 00:07:01.057 #undef SPDK_CONFIG_TSAN 00:07:01.057 #define SPDK_CONFIG_UBLK 1 00:07:01.057 #define SPDK_CONFIG_UBSAN 1 00:07:01.057 #undef SPDK_CONFIG_UNIT_TESTS 00:07:01.057 #undef SPDK_CONFIG_URING 00:07:01.057 #define SPDK_CONFIG_URING_PATH 00:07:01.057 #undef SPDK_CONFIG_URING_ZNS 00:07:01.057 #undef SPDK_CONFIG_USDT 00:07:01.057 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:01.057 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:01.057 #undef SPDK_CONFIG_VFIO_USER 00:07:01.057 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:01.057 #define SPDK_CONFIG_VHOST 1 00:07:01.057 #define SPDK_CONFIG_VIRTIO 1 00:07:01.057 #undef SPDK_CONFIG_VTUNE 00:07:01.057 #define SPDK_CONFIG_VTUNE_DIR 00:07:01.057 #define SPDK_CONFIG_WERROR 1 00:07:01.057 #define SPDK_CONFIG_WPDK_DIR 00:07:01.057 #undef SPDK_CONFIG_XNVME 00:07:01.057 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.run_test_name 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power ]] 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:01.057 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # : 1 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export 
SPDK_TEST_SMA 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- 
# echo leak:libfuse3.so 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:01.058 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3911772 ]] 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3911772 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.evRWs0 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target /tmp/spdk.evRWs0/tests/target /tmp/spdk.evRWs0 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 
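
The loop traced above is the harness probing for scratch space: df -T output is stripped of its header with grep -v Filesystem, and a read -r loop files each mount's device, filesystem type, total size, used and available space into associative arrays keyed by mount point. A minimal standalone sketch of the same pattern, assuming df's default 1K-block units (the variable names mirror the trace; this is not the harness's exact set_test_storage code):

    #!/usr/bin/env bash
    # Index `df -T` output by mount point, as the traced loop does.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source           # e.g. spdk_root
        fss["$mount"]=$fs                  # e.g. overlay
        sizes["$mount"]=$((size * 1024))   # df reports 1K blocks; keep bytes
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    echo "/ has ${avails[/]:-0} bytes free on ${fss[/]:-?}"
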
00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=900243456 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4384186368 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=89597284352 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=95562715136 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5965430784 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47777980416 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781355520 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19102957568 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19112546304 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9588736 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47780855808 00:07:01.059 10:34:29 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781359616 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=503808 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9556267008 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9556271104 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:01.059 * Looking for test storage... 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:01.059 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=89597284352 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8180023296 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:01.060 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 
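
Here the first storage candidate (the repo's test/nvmf/target directory, backed by the overlay root) is accepted: the requested 2147483648 bytes were padded to 2214592512, avails[/] (89597284352) covers that, and used-plus-requested (new_size=8180023296) stays at or below 95% of sizes[/]. The same arithmetic as a small sketch using this run's numbers (the reading of the padding as a fixed cushion is an assumption; the figures themselves come from the trace):

    #!/usr/bin/env bash
    # Re-run the capacity check above with the values from this job.
    requested_size=2214592512            # 2 GiB request plus padding, per the trace
    avail=89597284352                    # avails[/]
    size=95562715136                     # sizes[/]
    used=$((size - avail))               # 5965430784
    new_size=$((used + requested_size))  # 8180023296, matching the trace
    if ((avail >= requested_size && new_size * 100 / size <= 95)); then
        echo "candidate accepted: enough headroom on /"
    fi
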
00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.060 10:34:29 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.633 
10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:07.633 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:07.633 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
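
Both ports of the E810 adapter (PCI 0000:af:00.0/.1, vendor 0x8086 device 0x159b, bound to ice) have just been matched against the vendor/device tables built above, and NVME_CONNECT is rewritten to 'nvme connect -i 15' for the RDMA path. A hedged sketch of one way to do such a device scan via sysfs — illustrative only, not the harness's cached pci_bus_cache mechanism:

    #!/usr/bin/env bash
    # Enumerate Intel E810 ports by vendor/device ID and show the bound driver.
    intel=0x8086
    e810_ids=(0x1592 0x159b)             # device IDs from the table above
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] || continue
            driver=unbound
            [[ -e $dev/driver ]] && driver=$(basename "$(readlink -f "$dev/driver")")
            echo "Found ${dev##*/} ($vendor - $device), driver: $driver"
        done
    done
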
00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@377 -- # modinfo irdma 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:07.633 Found net devices under 0000:af:00.0: cvl_0_0 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:07.633 Found net devices under 0000:af:00.1: cvl_0_1 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:07.633 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:07:07.634 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:07.634 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:07.634 altname enp175s0f0np0 00:07:07.634 altname ens801f0np0 00:07:07.634 inet 192.168.100.8/24 scope global cvl_0_0 00:07:07.634 valid_lft forever preferred_lft forever 00:07:07.634 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:07.634 valid_lft forever preferred_lft forever 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:07:07.634 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:07.634 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:07.634 altname enp175s0f1np1 00:07:07.634 altname ens801f1np1 00:07:07.634 inet 192.168.100.9/24 scope global cvl_0_1 00:07:07.634 valid_lft forever preferred_lft forever 00:07:07.634 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:07.634 valid_lft forever preferred_lft forever 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:07.634 10:34:35 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:07.634 192.168.100.9' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:07.634 192.168.100.9' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:07.634 192.168.100.9' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.634 ************************************ 00:07:07.634 START TEST nvmf_filesystem_no_in_capsule 00:07:07.634 ************************************ 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3915297 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3915297 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 3915297 ']' 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:07.634 10:34:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.634 [2024-06-10 10:34:35.725493] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:07.634 [2024-06-10 10:34:35.725537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.634 [2024-06-10 10:34:35.785807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.634 [2024-06-10 10:34:35.862260] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.634 [2024-06-10 10:34:35.862298] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.634 [2024-06-10 10:34:35.862305] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.634 [2024-06-10 10:34:35.862311] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.634 [2024-06-10 10:34:35.862315] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
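
The target comes up here as pid 3915297 with a dedicated shared-memory id, full tracepoint mask and a four-core mask (-i 0 -e 0xFFFF -m 0xF), and the "Waiting for process..." line is waitforlisten polling the RPC Unix socket before any configuration is sent. A minimal sketch of that start-and-wait pattern, assuming a standard SPDK build tree and using the stock rpc_get_methods RPC purely as a liveness probe:

    #!/usr/bin/env bash
    # Start nvmf_tgt and block until /var/tmp/spdk.sock answers RPCs.
    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
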
00:07:07.634 [2024-06-10 10:34:35.862380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.634 [2024-06-10 10:34:35.862474] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.634 [2024-06-10 10:34:35.862541] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.635 [2024-06-10 10:34:35.862542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 [2024-06-10 10:34:36.577965] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:07.635 [2024-06-10 10:34:36.591768] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1d958f0/0x1d94f30) succeed. 00:07:07.635 [2024-06-10 10:34:36.600649] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1d96ca0/0x1d954b0) succeed. 00:07:07.635 [2024-06-10 10:34:36.600671] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.635 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.894 Malloc1 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.894 [2024-06-10 10:34:36.748097] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:07.894 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.895 10:34:36 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:07.895 { 00:07:07.895 "name": "Malloc1", 00:07:07.895 "aliases": [ 00:07:07.895 "92d7d0f0-20fc-4470-a8d1-fa78108aaf11" 00:07:07.895 ], 00:07:07.895 "product_name": "Malloc disk", 00:07:07.895 "block_size": 512, 00:07:07.895 "num_blocks": 1048576, 00:07:07.895 "uuid": "92d7d0f0-20fc-4470-a8d1-fa78108aaf11", 00:07:07.895 "assigned_rate_limits": { 00:07:07.895 "rw_ios_per_sec": 0, 00:07:07.895 "rw_mbytes_per_sec": 0, 00:07:07.895 "r_mbytes_per_sec": 0, 00:07:07.895 "w_mbytes_per_sec": 0 00:07:07.895 }, 00:07:07.895 "claimed": true, 00:07:07.895 "claim_type": "exclusive_write", 00:07:07.895 "zoned": false, 00:07:07.895 "supported_io_types": { 00:07:07.895 "read": true, 00:07:07.895 "write": true, 00:07:07.895 "unmap": true, 00:07:07.895 "write_zeroes": true, 00:07:07.895 "flush": true, 00:07:07.895 "reset": true, 00:07:07.895 "compare": false, 00:07:07.895 "compare_and_write": false, 00:07:07.895 "abort": true, 00:07:07.895 "nvme_admin": false, 00:07:07.895 "nvme_io": false 00:07:07.895 }, 00:07:07.895 "memory_domains": [ 00:07:07.895 { 00:07:07.895 "dma_device_id": "system", 00:07:07.895 "dma_device_type": 1 00:07:07.895 }, 00:07:07.895 { 00:07:07.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.895 "dma_device_type": 2 00:07:07.895 } 00:07:07.895 ], 00:07:07.895 "driver_specific": {} 00:07:07.895 } 00:07:07.895 ]' 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:07.895 10:34:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:08.153 10:34:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.153 10:34:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:08.153 10:34:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.153 10:34:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:08.153 10:34:37 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:10.058 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:10.058 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:10.058 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:10.317 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:10.576 10:34:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:11.513 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:11.513 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:11.513 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:11.513 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.513 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.513 ************************************ 00:07:11.513 START TEST filesystem_ext4 00:07:11.513 ************************************ 00:07:11.513 10:34:40 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:11.513 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:11.514 mke2fs 1.46.5 (30-Dec-2021) 00:07:11.514 Discarding device blocks: 0/522240 done 00:07:11.514 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:11.514 Filesystem UUID: 368703c3-afb7-43db-9288-a7d705323733 00:07:11.514 Superblock backups stored on blocks: 00:07:11.514 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:11.514 00:07:11.514 Allocating group tables: 0/64 done 00:07:11.514 Writing inode tables: 0/64 done 00:07:11.514 Creating journal (8192 blocks): done 00:07:11.514 Writing superblocks and filesystem accounting information: 0/64 done 00:07:11.514 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:11.514 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3915297 00:07:11.772 10:34:40 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.772 00:07:11.772 real 0m0.175s 00:07:11.772 user 0m0.020s 00:07:11.772 sys 0m0.066s 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:11.772 ************************************ 00:07:11.772 END TEST filesystem_ext4 00:07:11.772 ************************************ 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.772 ************************************ 00:07:11.772 START TEST filesystem_btrfs 00:07:11.772 ************************************ 00:07:11.772 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:11.773 btrfs-progs v6.6.2 00:07:11.773 See https://btrfs.readthedocs.io for more information. 
00:07:11.773 00:07:11.773 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:11.773 NOTE: several default settings have changed in version 5.15, please make sure 00:07:11.773 this does not affect your deployments: 00:07:11.773 - DUP for metadata (-m dup) 00:07:11.773 - enabled no-holes (-O no-holes) 00:07:11.773 - enabled free-space-tree (-R free-space-tree) 00:07:11.773 00:07:11.773 Label: (null) 00:07:11.773 UUID: dc84ce23-adc0-4a7c-8ab5-b85cece84937 00:07:11.773 Node size: 16384 00:07:11.773 Sector size: 4096 00:07:11.773 Filesystem size: 510.00MiB 00:07:11.773 Block group profiles: 00:07:11.773 Data: single 8.00MiB 00:07:11.773 Metadata: DUP 32.00MiB 00:07:11.773 System: DUP 8.00MiB 00:07:11.773 SSD detected: yes 00:07:11.773 Zoned device: no 00:07:11.773 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:11.773 Runtime features: free-space-tree 00:07:11.773 Checksum: crc32c 00:07:11.773 Number of devices: 1 00:07:11.773 Devices: 00:07:11.773 ID SIZE PATH 00:07:11.773 1 510.00MiB /dev/nvme0n1p1 00:07:11.773 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:11.773 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3915297 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.032 00:07:12.032 real 0m0.240s 00:07:12.032 user 0m0.038s 00:07:12.032 sys 0m0.109s 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:12.032 ************************************ 00:07:12.032 END TEST filesystem_btrfs 00:07:12.032 ************************************ 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 
-- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.032 ************************************ 00:07:12.032 START TEST filesystem_xfs 00:07:12.032 ************************************ 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:12.032 10:34:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:12.032 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:12.032 = sectsz=512 attr=2, projid32bit=1 00:07:12.032 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:12.032 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:12.032 data = bsize=4096 blocks=130560, imaxpct=25 00:07:12.032 = sunit=0 swidth=0 blks 00:07:12.032 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:12.032 log =internal log bsize=4096 blocks=16384, version=2 00:07:12.032 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:12.032 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:12.292 Discarding blocks...Done. 
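The make_filesystem helper traced just above (autotest_common.sh@925-936) is the same routine all three filesystem_* subtests call; only the force flag changes, because mke2fs spells "force" as -F while btrfs-progs and xfsprogs use -f. A condensed sketch of the traced steps, in the trace's own variable names — anything the trace does not show (for instance error handling between the mkfs call at @936 and the return at @944) is omitted here:

make_filesystem() {
    local fstype=$1          # ext4 | btrfs | xfs
    local dev_name=$2        # /dev/nvme0n1p1 in this run
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F             # mke2fs takes -F to force
    else
        force=-f             # mkfs.btrfs and mkfs.xfs take -f
    fi
    "mkfs.$fstype" $force "$dev_name"
}

Each subtest then mounts the fresh filesystem at /mnt/device, creates and removes a file with a sync on either side so both data and metadata traverse the NVMe-oF path, unmounts, and finally uses kill -0 on the target pid plus the lsblk greps at target/filesystem.sh@40-43 to confirm that the target process and the exported namespace both survived the I/O.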
00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3915297 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.292 00:07:12.292 real 0m0.198s 00:07:12.292 user 0m0.018s 00:07:12.292 sys 0m0.074s 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:12.292 ************************************ 00:07:12.292 END TEST filesystem_xfs 00:07:12.292 ************************************ 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:12.292 10:34:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o 
NAME,SERIAL 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3915297 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 3915297 ']' 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 3915297 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3915297 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3915297' 00:07:13.231 killing process with pid 3915297 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 3915297 00:07:13.231 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 3915297 00:07:13.490 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:13.490 00:07:13.490 real 0m6.830s 00:07:13.490 user 0m26.711s 00:07:13.490 sys 0m1.029s 00:07:13.490 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.490 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.490 ************************************ 00:07:13.490 END TEST nvmf_filesystem_no_in_capsule 00:07:13.490 ************************************ 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.749 ************************************ 00:07:13.749 START TEST nvmf_filesystem_in_capsule 00:07:13.749 ************************************ 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3916656 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3916656 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 3916656 ']' 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:13.749 10:34:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.749 [2024-06-10 10:34:42.627051] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:13.749 [2024-06-10 10:34:42.627090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.749 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.749 [2024-06-10 10:34:42.689228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.008 [2024-06-10 10:34:42.779547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.008 [2024-06-10 10:34:42.779593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.008 [2024-06-10 10:34:42.779602] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.008 [2024-06-10 10:34:42.779610] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:14.008 [2024-06-10 10:34:42.779616] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.008 [2024-06-10 10:34:42.779659] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.008 [2024-06-10 10:34:42.779760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.008 [2024-06-10 10:34:42.779844] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.008 [2024-06-10 10:34:42.779847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.576 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:14.576 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:14.576 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:14.576 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 [2024-06-10 10:34:43.500548] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xce68f0/0xce5f30) succeed. 00:07:14.577 [2024-06-10 10:34:43.509410] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xce7ca0/0xce64b0) succeed. 00:07:14.577 [2024-06-10 10:34:43.509432] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.577 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.836 Malloc1 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.836 [2024-06-10 10:34:43.668582] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:14.836 { 00:07:14.836 "name": "Malloc1", 00:07:14.836 "aliases": [ 00:07:14.836 "a5308f1b-4128-4469-8685-e1aa43177375" 00:07:14.836 ], 00:07:14.836 "product_name": "Malloc disk", 00:07:14.836 "block_size": 512, 00:07:14.836 "num_blocks": 1048576, 00:07:14.836 "uuid": "a5308f1b-4128-4469-8685-e1aa43177375", 00:07:14.836 "assigned_rate_limits": { 00:07:14.836 "rw_ios_per_sec": 0, 00:07:14.836 "rw_mbytes_per_sec": 0, 00:07:14.836 "r_mbytes_per_sec": 0, 00:07:14.836 "w_mbytes_per_sec": 0 00:07:14.836 }, 00:07:14.836 "claimed": true, 00:07:14.836 "claim_type": "exclusive_write", 00:07:14.836 "zoned": false, 00:07:14.836 "supported_io_types": { 00:07:14.836 "read": true, 00:07:14.836 "write": true, 00:07:14.836 "unmap": true, 00:07:14.836 "write_zeroes": true, 00:07:14.836 "flush": true, 00:07:14.836 "reset": true, 00:07:14.836 "compare": false, 00:07:14.836 "compare_and_write": false, 00:07:14.836 "abort": true, 00:07:14.836 "nvme_admin": false, 00:07:14.836 "nvme_io": false 00:07:14.836 }, 00:07:14.836 "memory_domains": [ 00:07:14.836 { 00:07:14.836 "dma_device_id": "system", 00:07:14.836 "dma_device_type": 1 00:07:14.836 }, 00:07:14.836 { 00:07:14.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.836 "dma_device_type": 2 00:07:14.836 } 00:07:14.836 ], 00:07:14.836 "driver_specific": {} 00:07:14.836 } 00:07:14.836 ]' 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:14.836 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:15.094 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.094 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:15.094 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.094 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:15.094 10:34:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:16.998 10:34:45 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:16.998 10:34:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:16.998 10:34:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:16.998 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:17.257 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:17.257 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:17.257 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:17.257 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:17.257 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:17.515 10:34:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.451 ************************************ 00:07:18.451 START TEST filesystem_in_capsule_ext4 00:07:18.451 ************************************ 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:18.451 10:34:47 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:18.451 mke2fs 1.46.5 (30-Dec-2021) 00:07:18.451 Discarding device blocks: 0/522240 done 00:07:18.451 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:18.451 Filesystem UUID: 59035d6e-7c73-48ac-a562-b58ddf18776a 00:07:18.451 Superblock backups stored on blocks: 00:07:18.451 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:18.451 00:07:18.451 Allocating group tables: 0/64 done 00:07:18.451 Writing inode tables: 0/64 done 00:07:18.451 Creating journal (8192 blocks): done 00:07:18.451 Writing superblocks and filesystem accounting information: 0/64 done 00:07:18.451 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:18.451 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.710 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3916656 00:07:18.710 10:34:47 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.710 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.710 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.710 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.710 00:07:18.710 real 0m0.174s 00:07:18.710 user 0m0.023s 00:07:18.710 sys 0m0.062s 00:07:18.710 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.710 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:18.710 ************************************ 00:07:18.711 END TEST filesystem_in_capsule_ext4 00:07:18.711 ************************************ 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.711 ************************************ 00:07:18.711 START TEST filesystem_in_capsule_btrfs 00:07:18.711 ************************************ 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:18.711 btrfs-progs v6.6.2 00:07:18.711 See https://btrfs.readthedocs.io for more information. 00:07:18.711 00:07:18.711 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:18.711 NOTE: several default settings have changed in version 5.15, please make sure 00:07:18.711 this does not affect your deployments: 00:07:18.711 - DUP for metadata (-m dup) 00:07:18.711 - enabled no-holes (-O no-holes) 00:07:18.711 - enabled free-space-tree (-R free-space-tree) 00:07:18.711 00:07:18.711 Label: (null) 00:07:18.711 UUID: bba82184-93d5-4d8a-86b9-9575b69c454a 00:07:18.711 Node size: 16384 00:07:18.711 Sector size: 4096 00:07:18.711 Filesystem size: 510.00MiB 00:07:18.711 Block group profiles: 00:07:18.711 Data: single 8.00MiB 00:07:18.711 Metadata: DUP 32.00MiB 00:07:18.711 System: DUP 8.00MiB 00:07:18.711 SSD detected: yes 00:07:18.711 Zoned device: no 00:07:18.711 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:18.711 Runtime features: free-space-tree 00:07:18.711 Checksum: crc32c 00:07:18.711 Number of devices: 1 00:07:18.711 Devices: 00:07:18.711 ID SIZE PATH 00:07:18.711 1 510.00MiB /dev/nvme0n1p1 00:07:18.711 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:18.711 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3916656 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.970 00:07:18.970 real 0m0.241s 00:07:18.970 user 0m0.025s 00:07:18.970 sys 0m0.122s 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.970 ************************************ 00:07:18.970 END TEST filesystem_in_capsule_btrfs 00:07:18.970 ************************************ 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.970 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.970 ************************************ 00:07:18.971 START TEST filesystem_in_capsule_xfs 00:07:18.971 ************************************ 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:18.971 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:18.971 = sectsz=512 attr=2, projid32bit=1 00:07:18.971 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:18.971 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:18.971 data = bsize=4096 blocks=130560, imaxpct=25 00:07:18.971 = sunit=0 swidth=0 blks 00:07:18.971 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:18.971 log =internal log bsize=4096 blocks=16384, version=2 00:07:18.971 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:18.971 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:18.971 Discarding blocks...Done. 
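Apart from the subtest names, this in-capsule pass differs from the no_in_capsule run only in how the transport was created: nvmf_filesystem_part received 4096, so nvmf_create_transport was issued with -c 4096, allowing writes of up to 4 KiB to travel inside the RDMA command capsule instead of being fetched by the target with a separate RDMA READ. The target-side sequence this trace issued, condensed into direct rpc.py calls — the rpc.py invocation form is illustrative, since the trace drives these through the rpc_cmd wrapper:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB backing bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The initiator side is the nvme connect seen earlier in the trace:

nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

waitforserial then polls lsblk -l -o NAME,SERIAL for the SPDKISFASTANDAWESOME serial (the grep -c loop at autotest_common.sh@1205-1207) until the namespace appears as nvme0n1, after which the same mkfs/mount/verify cycle as above is repeated.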
00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:18.971 10:34:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.230 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.230 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:19.230 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.230 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:19.230 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:19.230 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3916656 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.231 00:07:19.231 real 0m0.200s 00:07:19.231 user 0m0.023s 00:07:19.231 sys 0m0.068s 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:19.231 ************************************ 00:07:19.231 END TEST filesystem_in_capsule_xfs 00:07:19.231 ************************************ 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:19.231 10:34:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.200 10:34:49 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3916656 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 3916656 ']' 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 3916656 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3916656 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3916656' 00:07:20.200 killing process with pid 3916656 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 3916656 00:07:20.200 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 3916656 00:07:20.464 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:20.464 00:07:20.464 real 0m6.885s 00:07:20.464 user 0m26.833s 00:07:20.464 sys 0m1.078s 00:07:20.464 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:20.464 10:34:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.464 ************************************ 00:07:20.464 END TEST nvmf_filesystem_in_capsule 00:07:20.464 ************************************ 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:20.724 
10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:20.724 rmmod nvme_rdma 00:07:20.724 rmmod nvme_fabrics 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:20.724 00:07:20.724 real 0m20.210s 00:07:20.724 user 0m55.529s 00:07:20.724 sys 0m6.771s 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:20.724 10:34:49 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.724 ************************************ 00:07:20.724 END TEST nvmf_filesystem 00:07:20.724 ************************************ 00:07:20.724 10:34:49 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:20.724 10:34:49 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:20.724 10:34:49 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:20.724 10:34:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:20.724 ************************************ 00:07:20.724 START TEST nvmf_target_discovery 00:07:20.724 ************************************ 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:20.724 * Looking for test storage... 
00:07:20.724 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.724 10:34:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:20.725 10:34:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.301 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.301 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.301 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.301 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.301 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.301 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:27.302 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:27.302 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@377 -- # 
modinfo irdma 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:27.302 Found net devices under 0000:af:00.0: cvl_0_0 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:27.302 Found net devices under 0000:af:00.1: cvl_0_1 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:27.302 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:07:27.302 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:27.302 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:27.302 altname enp175s0f0np0 00:07:27.302 altname ens801f0np0 00:07:27.302 inet 192.168.100.8/24 scope global cvl_0_0 00:07:27.302 valid_lft forever preferred_lft forever 00:07:27.302 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:27.303 valid_lft forever preferred_lft forever 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:27.303 10:34:55 
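The cvl_0_0 block above shows how the harness learns each RDMA interface's IPv4 address: get_ip_address (nvmf/common.sh@112-113) parses iproute2's one-line output mode instead of scraping ifconfig. A standalone reconstruction of that helper, using the interface name from this run:

    get_ip_address() {
        local interface=$1
        # 'ip -o' prints one record per line; field 4 is ADDR/PREFIX,
        # and cut strips the /24 prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address cvl_0_0    # prints 192.168.100.8 on this host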
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:07:27.303 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:27.303 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:27.303 altname enp175s0f1np1 00:07:27.303 altname ens801f1np1 00:07:27.303 inet 192.168.100.9/24 scope global cvl_0_1 00:07:27.303 valid_lft forever preferred_lft forever 00:07:27.303 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:27.303 valid_lft forever preferred_lft forever 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:27.303 10:34:55 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:27.303 192.168.100.9' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:27.303 192.168.100.9' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:27.303 192.168.100.9' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3921359 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3921359 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 3921359 ']' 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:27.303 10:34:55 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.303 [2024-06-10 10:34:55.703080] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:27.303 [2024-06-10 10:34:55.703128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.303 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.303 [2024-06-10 10:34:55.764344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.303 [2024-06-10 10:34:55.839774] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.303 [2024-06-10 10:34:55.839813] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.303 [2024-06-10 10:34:55.839820] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.303 [2024-06-10 10:34:55.839827] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.303 [2024-06-10 10:34:55.839832] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
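At this point nvmfappstart has launched the target, build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (shared-memory id 0, all tracepoint groups enabled, reactors on cores 0-3, matching the four "Reactor started" notices that follow), recorded its pid as nvmfpid=3921359, and parked in waitforlisten until the RPC socket /var/tmp/spdk.sock accepts commands. A rough standalone equivalent, assuming an SPDK build tree with scripts/rpc.py available (the polling loop is illustrative, not the harness's actual waitforlisten):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers a trivial RPC.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done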
00:07:27.303 [2024-06-10 10:34:55.839889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.303 [2024-06-10 10:34:55.840012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.303 [2024-06-10 10:34:55.840035] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.303 [2024-06-10 10:34:55.840037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.563 [2024-06-10 10:34:56.569683] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x8d28f0/0x8d1f30) succeed. 00:07:27.563 [2024-06-10 10:34:56.578599] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x8d3ca0/0x8d24b0) succeed. 00:07:27.563 [2024-06-10 10:34:56.578621] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.563 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.822 Null1 00:07:27.822 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.822 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.822 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.822 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.822 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.822 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:27.822 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 [2024-06-10 10:34:56.626938] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 Null2 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:27.823 
10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 Null3 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 Null4 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:27.823 10:34:56 
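The rpc_cmd sequence unfolding here is discovery.sh's setup loop: for each i in 1..4 it creates a null bdev (NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512, per the variables set earlier), an allow-any-host subsystem with a fixed serial number, attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8:4420; it then adds a discovery listener and a referral to port 4430, which is what yields the six discovery log records printed below. A condensed sketch of the same calls issued through scripts/rpc.py directly (rpc_cmd is the harness's wrapper around the identical RPCs):

    for i in $(seq 1 4); do
        rpc.py bdev_null_create "Null$i" 102400 512
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    done
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430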
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.823 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:07:28.083 00:07:28.083 Discovery Log Number of Records 6, Generation counter 6 00:07:28.083 =====Discovery Log Entry 0====== 00:07:28.083 trtype: rdma 00:07:28.083 adrfam: ipv4 00:07:28.083 subtype: current discovery subsystem 00:07:28.083 treq: not required 00:07:28.083 portid: 0 00:07:28.083 trsvcid: 4420 00:07:28.083 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:28.083 traddr: 192.168.100.8 00:07:28.083 eflags: explicit discovery connections, duplicate discovery information 00:07:28.083 rdma_prtype: not specified 00:07:28.083 rdma_qptype: connected 00:07:28.083 rdma_cms: rdma-cm 00:07:28.083 rdma_pkey: 0x0000 00:07:28.083 =====Discovery Log Entry 1====== 00:07:28.083 trtype: rdma 00:07:28.083 adrfam: ipv4 00:07:28.083 subtype: nvme subsystem 00:07:28.083 treq: not required 00:07:28.083 portid: 0 00:07:28.083 trsvcid: 4420 00:07:28.083 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:28.083 traddr: 192.168.100.8 00:07:28.083 eflags: none 00:07:28.083 rdma_prtype: not specified 00:07:28.083 rdma_qptype: connected 00:07:28.083 rdma_cms: rdma-cm 00:07:28.083 rdma_pkey: 0x0000 00:07:28.083 =====Discovery 
Log Entry 2====== 00:07:28.083 trtype: rdma 00:07:28.083 adrfam: ipv4 00:07:28.083 subtype: nvme subsystem 00:07:28.083 treq: not required 00:07:28.083 portid: 0 00:07:28.083 trsvcid: 4420 00:07:28.083 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:28.083 traddr: 192.168.100.8 00:07:28.083 eflags: none 00:07:28.083 rdma_prtype: not specified 00:07:28.083 rdma_qptype: connected 00:07:28.083 rdma_cms: rdma-cm 00:07:28.083 rdma_pkey: 0x0000 00:07:28.083 =====Discovery Log Entry 3====== 00:07:28.083 trtype: rdma 00:07:28.083 adrfam: ipv4 00:07:28.083 subtype: nvme subsystem 00:07:28.083 treq: not required 00:07:28.083 portid: 0 00:07:28.083 trsvcid: 4420 00:07:28.083 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:28.083 traddr: 192.168.100.8 00:07:28.083 eflags: none 00:07:28.083 rdma_prtype: not specified 00:07:28.083 rdma_qptype: connected 00:07:28.083 rdma_cms: rdma-cm 00:07:28.083 rdma_pkey: 0x0000 00:07:28.083 =====Discovery Log Entry 4====== 00:07:28.083 trtype: rdma 00:07:28.083 adrfam: ipv4 00:07:28.083 subtype: nvme subsystem 00:07:28.083 treq: not required 00:07:28.083 portid: 0 00:07:28.083 trsvcid: 4420 00:07:28.083 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:28.083 traddr: 192.168.100.8 00:07:28.083 eflags: none 00:07:28.083 rdma_prtype: not specified 00:07:28.083 rdma_qptype: connected 00:07:28.083 rdma_cms: rdma-cm 00:07:28.083 rdma_pkey: 0x0000 00:07:28.083 =====Discovery Log Entry 5====== 00:07:28.083 trtype: rdma 00:07:28.083 adrfam: ipv4 00:07:28.083 subtype: discovery subsystem referral 00:07:28.083 treq: not required 00:07:28.083 portid: 0 00:07:28.083 trsvcid: 4430 00:07:28.083 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:28.083 traddr: 192.168.100.8 00:07:28.083 eflags: none 00:07:28.083 rdma_prtype: unrecognized 00:07:28.083 rdma_qptype: unrecognized 00:07:28.083 rdma_cms: unrecognized 00:07:28.083 rdma_pkey: 0x0000 00:07:28.083 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:28.083 Perform nvmf subsystem discovery via RPC 00:07:28.083 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:28.083 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.083 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.083 [ 00:07:28.083 { 00:07:28.083 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:28.083 "subtype": "Discovery", 00:07:28.083 "listen_addresses": [ 00:07:28.083 { 00:07:28.083 "trtype": "RDMA", 00:07:28.083 "adrfam": "IPv4", 00:07:28.083 "traddr": "192.168.100.8", 00:07:28.083 "trsvcid": "4420" 00:07:28.083 } 00:07:28.083 ], 00:07:28.083 "allow_any_host": true, 00:07:28.083 "hosts": [] 00:07:28.083 }, 00:07:28.083 { 00:07:28.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.083 "subtype": "NVMe", 00:07:28.083 "listen_addresses": [ 00:07:28.083 { 00:07:28.083 "trtype": "RDMA", 00:07:28.083 "adrfam": "IPv4", 00:07:28.083 "traddr": "192.168.100.8", 00:07:28.083 "trsvcid": "4420" 00:07:28.083 } 00:07:28.083 ], 00:07:28.083 "allow_any_host": true, 00:07:28.083 "hosts": [], 00:07:28.083 "serial_number": "SPDK00000000000001", 00:07:28.083 "model_number": "SPDK bdev Controller", 00:07:28.083 "max_namespaces": 32, 00:07:28.083 "min_cntlid": 1, 00:07:28.083 "max_cntlid": 65519, 00:07:28.083 "namespaces": [ 00:07:28.083 { 00:07:28.083 "nsid": 1, 00:07:28.083 "bdev_name": "Null1", 00:07:28.083 "name": "Null1", 00:07:28.083 "nguid": "BC2CBB3FD29F4B8897ED46777C91AF53", 
00:07:28.083 "uuid": "bc2cbb3f-d29f-4b88-97ed-46777c91af53" 00:07:28.083 } 00:07:28.083 ] 00:07:28.083 }, 00:07:28.083 { 00:07:28.083 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:28.083 "subtype": "NVMe", 00:07:28.083 "listen_addresses": [ 00:07:28.083 { 00:07:28.083 "trtype": "RDMA", 00:07:28.083 "adrfam": "IPv4", 00:07:28.083 "traddr": "192.168.100.8", 00:07:28.083 "trsvcid": "4420" 00:07:28.083 } 00:07:28.083 ], 00:07:28.083 "allow_any_host": true, 00:07:28.083 "hosts": [], 00:07:28.083 "serial_number": "SPDK00000000000002", 00:07:28.083 "model_number": "SPDK bdev Controller", 00:07:28.083 "max_namespaces": 32, 00:07:28.083 "min_cntlid": 1, 00:07:28.083 "max_cntlid": 65519, 00:07:28.083 "namespaces": [ 00:07:28.083 { 00:07:28.083 "nsid": 1, 00:07:28.083 "bdev_name": "Null2", 00:07:28.084 "name": "Null2", 00:07:28.084 "nguid": "1AA2132C61C5458BA9D0665DF5C7FB9D", 00:07:28.084 "uuid": "1aa2132c-61c5-458b-a9d0-665df5c7fb9d" 00:07:28.084 } 00:07:28.084 ] 00:07:28.084 }, 00:07:28.084 { 00:07:28.084 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:28.084 "subtype": "NVMe", 00:07:28.084 "listen_addresses": [ 00:07:28.084 { 00:07:28.084 "trtype": "RDMA", 00:07:28.084 "adrfam": "IPv4", 00:07:28.084 "traddr": "192.168.100.8", 00:07:28.084 "trsvcid": "4420" 00:07:28.084 } 00:07:28.084 ], 00:07:28.084 "allow_any_host": true, 00:07:28.084 "hosts": [], 00:07:28.084 "serial_number": "SPDK00000000000003", 00:07:28.084 "model_number": "SPDK bdev Controller", 00:07:28.084 "max_namespaces": 32, 00:07:28.084 "min_cntlid": 1, 00:07:28.084 "max_cntlid": 65519, 00:07:28.084 "namespaces": [ 00:07:28.084 { 00:07:28.084 "nsid": 1, 00:07:28.084 "bdev_name": "Null3", 00:07:28.084 "name": "Null3", 00:07:28.084 "nguid": "6D364E4F134E4248B9B0789F66320EAF", 00:07:28.084 "uuid": "6d364e4f-134e-4248-b9b0-789f66320eaf" 00:07:28.084 } 00:07:28.084 ] 00:07:28.084 }, 00:07:28.084 { 00:07:28.084 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:28.084 "subtype": "NVMe", 00:07:28.084 "listen_addresses": [ 00:07:28.084 { 00:07:28.084 "trtype": "RDMA", 00:07:28.084 "adrfam": "IPv4", 00:07:28.084 "traddr": "192.168.100.8", 00:07:28.084 "trsvcid": "4420" 00:07:28.084 } 00:07:28.084 ], 00:07:28.084 "allow_any_host": true, 00:07:28.084 "hosts": [], 00:07:28.084 "serial_number": "SPDK00000000000004", 00:07:28.084 "model_number": "SPDK bdev Controller", 00:07:28.084 "max_namespaces": 32, 00:07:28.084 "min_cntlid": 1, 00:07:28.084 "max_cntlid": 65519, 00:07:28.084 "namespaces": [ 00:07:28.084 { 00:07:28.084 "nsid": 1, 00:07:28.084 "bdev_name": "Null4", 00:07:28.084 "name": "Null4", 00:07:28.084 "nguid": "2385B4438CCF49BCB362A58D677A3B69", 00:07:28.084 "uuid": "2385b443-8ccf-49bc-b362-a58d677a3b69" 00:07:28.084 } 00:07:28.084 ] 00:07:28.084 } 00:07:28.084 ] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery 
-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.084 10:34:56 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:28.084 rmmod nvme_rdma 00:07:28.084 rmmod nvme_fabrics 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3921359 ']' 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3921359 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 3921359 ']' 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 3921359 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3921359 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:28.084 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3921359' 00:07:28.084 killing process with pid 3921359 00:07:28.085 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 3921359 00:07:28.085 10:34:57 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@973 -- # wait 3921359 00:07:28.344 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.344 10:34:57 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:28.344 00:07:28.344 real 0m7.698s 00:07:28.344 user 0m7.788s 00:07:28.344 sys 0m4.766s 00:07:28.344 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.344 10:34:57 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:28.344 ************************************ 00:07:28.344 END TEST nvmf_target_discovery 00:07:28.344 ************************************ 00:07:28.344 10:34:57 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:28.344 10:34:57 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:28.344 10:34:57 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.344 10:34:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:28.603 ************************************ 00:07:28.603 START TEST nvmf_referrals 00:07:28.603 ************************************ 00:07:28.603 10:34:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:07:28.603 * Looking for test storage... 00:07:28.603 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:28.604 10:34:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.878 10:35:02 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.878 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:33.879 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:33.879 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@377 -- # modinfo irdma 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:33.879 Found net devices under 0000:af:00.0: cvl_0_0 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:33.879 Found net devices under 0000:af:00.1: cvl_0_1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:07:33.879 16: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:07:33.879 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:33.879 altname enp175s0f0np0 00:07:33.879 altname ens801f0np0 00:07:33.879 inet 192.168.100.8/24 scope global cvl_0_0 00:07:33.879 valid_lft forever preferred_lft forever 00:07:33.879 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:33.879 valid_lft forever preferred_lft forever 00:07:33.879
10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:07:33.879 17: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:07:33.879 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:33.879 altname enp175s0f1np1 00:07:33.879 altname ens801f1np1 00:07:33.879 inet 192.168.100.9/24 scope global cvl_0_1 00:07:33.879 valid_lft forever preferred_lft forever 00:07:33.879 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:33.879 valid_lft forever preferred_lft forever 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:33.879 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:33.880
10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:33.880 192.168.100.9' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:33.880 192.168.100.9' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:33.880 192.168.100.9' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3925114 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3925114 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 3925114 ']' 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:33.880 10:35:02 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.880 [2024-06-10 10:35:02.896181] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:33.880 [2024-06-10 10:35:02.896225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.139 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.139 [2024-06-10 10:35:02.955827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.139 [2024-06-10 10:35:03.034302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.139 [2024-06-10 10:35:03.034338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.139 [2024-06-10 10:35:03.034345] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.139 [2024-06-10 10:35:03.034351] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.139 [2024-06-10 10:35:03.034356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.139 [2024-06-10 10:35:03.034393] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.139 [2024-06-10 10:35:03.034410] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.139 [2024-06-10 10:35:03.034500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.139 [2024-06-10 10:35:03.034501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.077 [2024-06-10 10:35:03.831431] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x55b8f0/0x55af30) succeed. 
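The trace above starts the target and creates the RDMA transport; the records that follow add a discovery listener on 192.168.100.8:8009 and register three referrals. Condensed into a standalone sketch, assuming scripts/rpc.py is used directly in place of the harness's rpc_cmd wrapper and that this run's listener address applies:

    # Sketch of the bring-up traced here; paths and addresses are from this run.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    sleep 2   # the harness instead polls the RPC socket via waitforlisten
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # the test expects 3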
00:07:35.077 [2024-06-10 10:35:03.840398] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x55cca0/0x55b4b0) succeed. 00:07:35.077 [2024-06-10 10:35:03.840420] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.077 [2024-06-10 10:35:03.852623] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.077 10:35:03 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:35.077 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:35.077 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:35.077 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:07:35.077 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.077 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:35.336 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.337 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 
-- # sort 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:35.596 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 
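The assertions traced here compare two views of the same referral list: the RPC side (nvmf_discovery_get_referrals piped through jq -r '.[].address.traddr') and the host side, where the entries must surface in the discovery log page. The host-side half of that comparison reduces to the sketch below; the hostnqn/hostid values are the ones generated earlier in this run and will differ elsewhere:

    # Fetch the discovery log as JSON and keep only referral/subsystem entries;
    # the record describing the responding discovery subsystem itself is filtered out.
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562 \
        -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

A referral added with -n nqn.2016-06.io.spdk:cnode1 surfaces as an "nvme subsystem" record, while one added with -n discovery appears as a "discovery subsystem referral"; the jq select on .subtype is how the checks above tell the two apart.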
00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.855 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.115 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:36.115 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:36.115 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:36.115 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:36.115 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:36.115 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:36.115 10:35:04 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:36.115 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:36.115 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:36.115 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:36.115 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:36.115 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:36.115 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.374 
10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.374 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:36.374 rmmod nvme_rdma 00:07:36.374 rmmod nvme_fabrics 00:07:36.632 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.632 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:36.632 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:36.632 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3925114 ']' 00:07:36.632 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3925114 00:07:36.632 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 3925114 ']' 00:07:36.632 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 3925114 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3925114 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3925114' 00:07:36.633 killing process with pid 3925114 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@968 -- # kill 3925114 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 3925114 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:36.633 00:07:36.633 real 0m8.282s 00:07:36.633 user 0m12.830s 00:07:36.633 sys 0m4.728s 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.633 10:35:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:36.892 ************************************ 00:07:36.892 END TEST nvmf_referrals 00:07:36.892 ************************************ 00:07:36.892 10:35:05 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:36.892 10:35:05 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:36.892 10:35:05 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.892 10:35:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:36.892 ************************************ 00:07:36.892 START TEST nvmf_connect_disconnect 00:07:36.892 ************************************ 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:36.893 * Looking for test storage... 00:07:36.893 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
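Each suite in this log tears down the same way before the next one starts: nvmftestfini unloads the host-side fabrics modules and killprocess stops the target, after which run_test prints the time summary between the END/START banners. Outside the harness, that teardown amounts to this sketch, assuming the target pid was captured in $nvmfpid as the harness does:

    sync                        # flush before pulling the fabric out from under I/O
    modprobe -v -r nvme-rdma    # the trace retries this under set +e, up to 20 times
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"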
00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.893 10:35:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@297 -- # local -ga x722 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:43.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:43.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # modinfo irdma 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:43.464 Found net devices under 0000:af:00.0: cvl_0_0 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:43.464 Found net devices under 0000:af:00.1: cvl_0_1 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.464 10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.465 
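The device discovery above boils down to a /sys walk plus one driver load; a standalone sketch of the same steps for the two E810 functions found in this run:

for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the directory prefix, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
modinfo irdma > /dev/null && modprobe irdma roce_ena=1  # E810 + RDMA requires irdma in RoCE mode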
10:35:11 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:43.465 10:35:12 
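rdma_device_init's module loading, traced modprobe-by-modprobe above, is equivalent to this loop:

if [ "$(uname)" = Linux ]; then
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"   # IB core plus connection-manager stack needed for NVMe/RDMA
    done
fi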
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:07:43.465 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:43.465 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:43.465 altname enp175s0f0np0 00:07:43.465 altname ens801f0np0 00:07:43.465 inet 192.168.100.8/24 scope global cvl_0_0 00:07:43.465 valid_lft forever preferred_lft forever 00:07:43.465 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:43.465 valid_lft forever preferred_lft forever 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:07:43.465 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:43.465 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:43.465 altname enp175s0f1np1 00:07:43.465 altname ens801f1np1 00:07:43.465 inet 192.168.100.9/24 scope global cvl_0_1 00:07:43.465 valid_lft forever preferred_lft forever 00:07:43.465 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:43.465 valid_lft forever preferred_lft forever 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- 
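get_ip_address, as traced above, is a three-stage pipeline over the single-line `ip -o` output, where field 4 is ADDR/PREFIX:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address cvl_0_0   # -> 192.168.100.8
get_ip_address cvl_0_1   # -> 192.168.100.9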
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:43.465 192.168.100.9' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:43.465 192.168.100.9' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # 
echo '192.168.100.8 00:07:43.465 192.168.100.9' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3929122 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3929122 00:07:43.465 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.466 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 3929122 ']' 00:07:43.466 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.466 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:43.466 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.466 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:43.466 10:35:12 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:43.466 [2024-06-10 10:35:12.255007] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:43.466 [2024-06-10 10:35:12.255059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.466 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.466 [2024-06-10 10:35:12.315142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.466 [2024-06-10 10:35:12.394336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.466 [2024-06-10 10:35:12.394370] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
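Two things happen in quick succession above: the per-interface addresses are folded into RDMA_IP_LIST (one IP per line) and split back out with head/tail, and then the target binary is launched. The split is exactly:

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

The launch line and PID capture match the trace; waitforlisten's body is not shown in this log, so the polling loop below is an assumption:

"$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while ! "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target process died
    sleep 0.5
done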
00:07:43.466 [2024-06-10 10:35:12.394377] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.466 [2024-06-10 10:35:12.394383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.466 [2024-06-10 10:35:12.394388] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.466 [2024-06-10 10:35:12.394437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.466 [2024-06-10 10:35:12.394531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.466 [2024-06-10 10:35:12.394617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.466 [2024-06-10 10:35:12.394618] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.400 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:44.400 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:07:44.400 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.400 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:44.401 [2024-06-10 10:35:13.111930] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:44.401 [2024-06-10 10:35:13.125126] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x19d68f0/0x19d5f30) succeed. 00:07:44.401 [2024-06-10 10:35:13.133943] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x19d7ca0/0x19d64b0) succeed. 00:07:44.401 [2024-06-10 10:35:13.133967] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
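The transport is created with a single RPC; a sketch using rpc.py directly, which is what the rpc_cmd wrapper invokes, with flag meanings per the trace and the notices it produced:

# -u 8192: io unit size (the target raises it to 24576 for this device, per the NOTICE)
# -c 0:    requested in-capsule data size (raised to the 256-byte minimum, per the WARNING above)
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0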
New I/O unit size 24576 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:44.401 [2024-06-10 10:35:13.188936] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:44.401 10:35:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:46.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.612 NQN:nqn.2016-06.io.spdk:cnode1 
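The target-side bring-up above is four RPCs, after which the loop runs with xtrace off, so only the nvme-cli 'disconnected' messages appear below. The RPCs are verbatim from the trace; the loop body is an assumed reconstruction (waitforserial is a common.sh helper whose invocation is not visible in this log):

rpc_cmd bdev_malloc_create 64 512                        # 64 MB bdev, 512-byte blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
for ((i = 0; i < 100; i++)); do
    nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # prints the NQN:... disconnected lines
done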
disconnected 1 controller(s) 00:08:17.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.158 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:15.817 rmmod nvme_rdma 00:12:15.817 rmmod nvme_fabrics 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3929122 ']' 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3929122 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 3929122 ']' 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 3929122 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3929122 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3929122' 00:12:15.817 killing process with pid 3929122 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 3929122 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 3929122 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:15.817 00:12:15.817 real 4m39.023s 00:12:15.817 user 18m8.240s 00:12:15.817 sys 0m16.451s 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:15.817 10:39:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:15.817 ************************************ 00:12:15.817 END TEST nvmf_connect_disconnect 00:12:15.817 ************************************ 00:12:15.817 10:39:44 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:15.817 10:39:44 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:15.817 10:39:44 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:15.817 10:39:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:15.817 ************************************ 00:12:15.817 START TEST nvmf_multitarget 00:12:15.817 ************************************ 00:12:15.817 10:39:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:16.077 * Looking for test 
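nvmftestfini's cleanup just above unloads the host-side modules with retries, then kills the target by its saved pid. A condensed sketch of the traced sequence (the {1..20} retry bound is from the trace; the sleep between attempts is an assumption, since this run unloads cleanly on the first pass):

set +e                              # a failed unload just retries
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e
kill "$nvmfpid" && wait "$nvmfpid"  # killprocess: terminate the target, reap it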
storage... 00:12:16.077 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
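Unlike the connect_disconnect run, the multitarget test generates a host identity up front. The NQN and host id come from nvme-cli as traced; the suffix extraction shown here is an assumed equivalent, since the log only shows the resulting values:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # assumed: strip everything through ':uuid:'
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")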
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.077 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.078 10:39:44 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:22.649 10:39:50 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:22.649 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:22.649 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@377 -- # modinfo irdma 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.649 
10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:22.649 Found net devices under 0000:af:00.0: cvl_0_0 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:22.649 Found net devices under 0000:af:00.1: cvl_0_1 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.649 10:39:50 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:22.649 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:12:22.650 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:22.650 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:22.650 altname enp175s0f0np0 00:12:22.650 altname ens801f0np0 00:12:22.650 inet 192.168.100.8/24 scope global cvl_0_0 00:12:22.650 valid_lft forever preferred_lft forever 00:12:22.650 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:22.650 valid_lft forever preferred_lft forever 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:12:22.650 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:22.650 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:22.650 altname enp175s0f1np1 00:12:22.650 altname ens801f1np1 00:12:22.650 inet 192.168.100.9/24 scope 
global cvl_0_1 00:12:22.650 valid_lft forever preferred_lft forever 00:12:22.650 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:22.650 valid_lft forever preferred_lft forever 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 
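[Editor's aside on the traced helper: the get_ip_address calls above are a single three-stage pipeline. A minimal standalone sketch in bash; cvl_0_0 is the netdev name from this run, any interface works:]
    # Print the first IPv4 address bound to an interface, prefix length stripped.
    get_ip_address() {
        local interface=$1
        # -o prints one address per line; field 4 is "ADDR/PREFIX", cut keeps ADDR.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0   # prints 192.168.100.8 on this rig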
00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:22.650 192.168.100.9' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:22.650 192.168.100.9' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:22.650 192.168.100.9' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3979892 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3979892 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 3979892 ']' 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:22.650 10:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.650 [2024-06-10 10:39:50.693285] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:12:22.650 [2024-06-10 10:39:50.693334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.650 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.650 [2024-06-10 10:39:50.755965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.650 [2024-06-10 10:39:50.829754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.650 [2024-06-10 10:39:50.829793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.650 [2024-06-10 10:39:50.829803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.650 [2024-06-10 10:39:50.829809] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.650 [2024-06-10 10:39:50.829813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.650 [2024-06-10 10:39:50.829885] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.650 [2024-06-10 10:39:50.830002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.650 [2024-06-10 10:39:50.830028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.650 [2024-06-10 10:39:50.830029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.650 10:39:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:22.650 10:39:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:12:22.650 10:39:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.650 10:39:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:22.651 10:39:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.651 10:39:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.651 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:22.651 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.651 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:22.651 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:22.651 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:22.910 "nvmf_tgt_1" 00:12:22.910 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:22.910 "nvmf_tgt_2" 00:12:22.910 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:22.910 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:23.168 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 
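[The multitarget exercise above comes down to four RPC calls against the running nvmf_tgt. A condensed bash sketch of the same sequence; $rpc stands for test/nvmf/target/multitarget_rpc.py in the SPDK tree (the workspace path in this run), and the flags are copied from the trace:]
    rpc="$SPDK_DIR/test/nvmf/target/multitarget_rpc.py"   # $SPDK_DIR is a placeholder for the checkout
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # only the default target exists at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]      # default target plus the two new ones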
00:12:23.168 10:39:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:23.168 true 00:12:23.168 10:39:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:23.168 true 00:12:23.168 10:39:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.168 10:39:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:23.426 rmmod nvme_rdma 00:12:23.426 rmmod nvme_fabrics 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3979892 ']' 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3979892 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 3979892 ']' 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 3979892 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3979892 00:12:23.426 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:23.427 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:23.427 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3979892' 00:12:23.427 killing process with pid 3979892 00:12:23.427 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 3979892 00:12:23.427 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 3979892 00:12:23.685 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.685 10:39:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:23.685 00:12:23.685 real 0m7.712s 00:12:23.685 
user 0m9.176s 00:12:23.685 sys 0m4.709s 00:12:23.685 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:23.685 10:39:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.685 ************************************ 00:12:23.685 END TEST nvmf_multitarget 00:12:23.685 ************************************ 00:12:23.685 10:39:52 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:23.685 10:39:52 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:23.685 10:39:52 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:23.685 10:39:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:23.685 ************************************ 00:12:23.685 START TEST nvmf_rpc 00:12:23.685 ************************************ 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:23.686 * Looking for test storage... 00:12:23.686 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.686 
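[The NVME_HOSTNQN/NVME_HOSTID pair picked up while sourcing common.sh a few lines back comes straight from nvme-cli; the host ID is the UUID tail of the generated NQN, as the values in this log show. A minimal bash sketch of that derivation; the parameter expansion is one way to get the split, not necessarily the exact expression common.sh uses:]
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:801347e8-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep everything after the last ':' (the bare UUID)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")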
10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.686 10:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.945 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.945 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.945 10:39:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.945 10:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:30.516 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:30.516 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@377 -- # modinfo irdma 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:30.516 Found net devices under 0000:af:00.0: cvl_0_0 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:30.516 Found net devices under 0000:af:00.1: cvl_0_1 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:30.516 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:30.517 10:39:58 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:12:30.517 16: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:12:30.517 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:30.517 altname enp175s0f0np0 00:12:30.517 altname ens801f0np0 00:12:30.517 inet 192.168.100.8/24 scope global cvl_0_0 00:12:30.517 valid_lft forever preferred_lft forever 00:12:30.517 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:30.517 valid_lft forever preferred_lft forever 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:12:30.517 17: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:12:30.517 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:30.517 altname enp175s0f1np1 00:12:30.517 altname ens801f1np1 00:12:30.517 inet 192.168.100.9/24 scope global cvl_0_1 00:12:30.517 valid_lft forever preferred_lft forever 00:12:30.517 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:30.517 valid_lft forever preferred_lft forever 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86
-- # get_rdma_if_list 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:30.517 192.168.100.9' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:30.517 192.168.100.9' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:30.517 192.168.100.9' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 
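[The head/tail juggling above splits the newline-separated RDMA_IP_LIST into the two target addresses. Reduced to a standalone bash sketch, with the addresses from this run:]
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    [ -n "$NVMF_FIRST_TARGET_IP" ] || { echo 'no RDMA-capable interface found' >&2; exit 1; }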
00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3983583 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3983583 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 3983583 ']' 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:30.517 10:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.517 [2024-06-10 10:39:58.765933] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:12:30.517 [2024-06-10 10:39:58.765995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.517 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.517 [2024-06-10 10:39:58.823015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.517 [2024-06-10 10:39:58.901216] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.517 [2024-06-10 10:39:58.901254] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.517 [2024-06-10 10:39:58.901260] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.517 [2024-06-10 10:39:58.901266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.518 [2024-06-10 10:39:58.901271] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
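[nvmfappstart, as traced above, boils down to launching nvmf_tgt in the background and blocking until its RPC socket answers. A reduced bash sketch of that start-and-wait pattern; polling rpc_get_methods is one way to reproduce what waitforlisten does, assuming the stock /var/tmp/spdk.sock endpoint:]
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoint groups, cores 0-3
    nvmfpid=$!
    # Block until the app starts answering on its UNIX-domain RPC socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
        sleep 0.5
    done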
00:12:30.518 [2024-06-10 10:39:58.901333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.518 [2024-06-10 10:39:58.901429] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.518 [2024-06-10 10:39:58.901515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.518 [2024-06-10 10:39:58.901516] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:30.777 "tick_rate": 2100000000, 00:12:30.777 "poll_groups": [ 00:12:30.777 { 00:12:30.777 "name": "nvmf_tgt_poll_group_000", 00:12:30.777 "admin_qpairs": 0, 00:12:30.777 "io_qpairs": 0, 00:12:30.777 "current_admin_qpairs": 0, 00:12:30.777 "current_io_qpairs": 0, 00:12:30.777 "pending_bdev_io": 0, 00:12:30.777 "completed_nvme_io": 0, 00:12:30.777 "transports": [] 00:12:30.777 }, 00:12:30.777 { 00:12:30.777 "name": "nvmf_tgt_poll_group_001", 00:12:30.777 "admin_qpairs": 0, 00:12:30.777 "io_qpairs": 0, 00:12:30.777 "current_admin_qpairs": 0, 00:12:30.777 "current_io_qpairs": 0, 00:12:30.777 "pending_bdev_io": 0, 00:12:30.777 "completed_nvme_io": 0, 00:12:30.777 "transports": [] 00:12:30.777 }, 00:12:30.777 { 00:12:30.777 "name": "nvmf_tgt_poll_group_002", 00:12:30.777 "admin_qpairs": 0, 00:12:30.777 "io_qpairs": 0, 00:12:30.777 "current_admin_qpairs": 0, 00:12:30.777 "current_io_qpairs": 0, 00:12:30.777 "pending_bdev_io": 0, 00:12:30.777 "completed_nvme_io": 0, 00:12:30.777 "transports": [] 00:12:30.777 }, 00:12:30.777 { 00:12:30.777 "name": "nvmf_tgt_poll_group_003", 00:12:30.777 "admin_qpairs": 0, 00:12:30.777 "io_qpairs": 0, 00:12:30.777 "current_admin_qpairs": 0, 00:12:30.777 "current_io_qpairs": 0, 00:12:30.777 "pending_bdev_io": 0, 00:12:30.777 "completed_nvme_io": 0, 00:12:30.777 "transports": [] 00:12:30.777 } 00:12:30.777 ] 00:12:30.777 }' 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:12:30.777 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.778 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.778 [2024-06-10 10:39:59.754554] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x18df900/0x18def40) succeed. 00:12:30.778 [2024-06-10 10:39:59.763376] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x18e0c70/0x18df4c0) succeed. 00:12:30.778 [2024-06-10 10:39:59.763397] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:12:30.778 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.778 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:30.778 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:30.778 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.778 10:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:30.778 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:30.778 "tick_rate": 2100000000, 00:12:30.778 "poll_groups": [ 00:12:30.778 { 00:12:30.778 "name": "nvmf_tgt_poll_group_000", 00:12:30.778 "admin_qpairs": 0, 00:12:30.778 "io_qpairs": 0, 00:12:30.778 "current_admin_qpairs": 0, 00:12:30.778 "current_io_qpairs": 0, 00:12:30.778 "pending_bdev_io": 0, 00:12:30.778 "completed_nvme_io": 0, 00:12:30.778 "transports": [ 00:12:30.778 { 00:12:30.778 "trtype": "RDMA", 00:12:30.778 "pending_data_buffer": 0, 00:12:30.778 "devices": [ 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f0", 00:12:30.778 "polls": 1629, 00:12:30.778 "idle_polls": 1629, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 00:12:30.778 "pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 }, 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f1", 00:12:30.778 "polls": 1629, 00:12:30.778 "idle_polls": 1629, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 00:12:30.778 "pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 }, 00:12:30.778 { 00:12:30.778 "name": "nvmf_tgt_poll_group_001", 00:12:30.778 "admin_qpairs": 0, 00:12:30.778 "io_qpairs": 0, 00:12:30.778 "current_admin_qpairs": 0, 00:12:30.778 "current_io_qpairs": 0, 00:12:30.778 "pending_bdev_io": 0, 00:12:30.778 "completed_nvme_io": 0, 00:12:30.778 "transports": [ 00:12:30.778 { 00:12:30.778 "trtype": "RDMA", 00:12:30.778 "pending_data_buffer": 0, 00:12:30.778 "devices": [ 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f0", 00:12:30.778 "polls": 1541, 00:12:30.778 "idle_polls": 1541, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 00:12:30.778 
"pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 }, 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f1", 00:12:30.778 "polls": 1541, 00:12:30.778 "idle_polls": 1541, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 00:12:30.778 "pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 }, 00:12:30.778 { 00:12:30.778 "name": "nvmf_tgt_poll_group_002", 00:12:30.778 "admin_qpairs": 0, 00:12:30.778 "io_qpairs": 0, 00:12:30.778 "current_admin_qpairs": 0, 00:12:30.778 "current_io_qpairs": 0, 00:12:30.778 "pending_bdev_io": 0, 00:12:30.778 "completed_nvme_io": 0, 00:12:30.778 "transports": [ 00:12:30.778 { 00:12:30.778 "trtype": "RDMA", 00:12:30.778 "pending_data_buffer": 0, 00:12:30.778 "devices": [ 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f0", 00:12:30.778 "polls": 1451, 00:12:30.778 "idle_polls": 1451, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 00:12:30.778 "pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 }, 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f1", 00:12:30.778 "polls": 1451, 00:12:30.778 "idle_polls": 1451, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 00:12:30.778 "pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 }, 00:12:30.778 { 00:12:30.778 "name": "nvmf_tgt_poll_group_003", 00:12:30.778 "admin_qpairs": 0, 00:12:30.778 "io_qpairs": 0, 00:12:30.778 "current_admin_qpairs": 0, 00:12:30.778 "current_io_qpairs": 0, 00:12:30.778 "pending_bdev_io": 0, 00:12:30.778 "completed_nvme_io": 0, 00:12:30.778 "transports": [ 00:12:30.778 { 00:12:30.778 "trtype": "RDMA", 00:12:30.778 "pending_data_buffer": 0, 00:12:30.778 "devices": [ 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f0", 00:12:30.778 "polls": 1034, 00:12:30.778 "idle_polls": 1034, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 00:12:30.778 "pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 }, 00:12:30.778 { 00:12:30.778 "name": "rocep175s0f1", 00:12:30.778 "polls": 1034, 00:12:30.778 "idle_polls": 1034, 00:12:30.778 "completions": 0, 00:12:30.778 "requests": 0, 00:12:30.778 "request_latency": 0, 00:12:30.778 "pending_free_request": 0, 00:12:30.778 "pending_rdma_read": 0, 00:12:30.778 "pending_rdma_write": 0, 
00:12:30.778 "pending_rdma_send": 0, 00:12:30.778 "total_send_wrs": 0, 00:12:30.778 "send_doorbell_updates": 0, 00:12:30.778 "total_recv_wrs": 0, 00:12:30.778 "recv_doorbell_updates": 0 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 } 00:12:30.778 ] 00:12:30.778 }' 00:12:31.063 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:31.063 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:31.063 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:31.063 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:31.063 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:31.063 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:31.063 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:31.064 10:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 Malloc1 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 10:40:00 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.064 [2024-06-10 10:40:00.067337] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:31.064 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:31.336 [2024-06-10 10:40:00.104289] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' 
does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:31.336 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:31.336 could not add new controller: failed to write to nvme-fabrics device 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.336 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:31.595 10:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.595 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:31.595 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.595 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:31.595 10:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:33.498 10:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:33.498 10:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:33.498 10:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.498 10:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:33.498 10:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.498 10:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:33.498 10:40:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
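The trace above exercises SPDK's per-host authorization: with allow_any_host disabled (-d), the host's connect attempt is rejected with "does not allow host"; after nvmf_subsystem_add_host the same connect succeeds, and nvmf_subsystem_remove_host revokes access again. A condensed sketch of that flow, reconstructed from the commands traced above (not a verbatim excerpt of rpc.sh; a running SPDK target plus scripts/rpc.py or the suite's rpc_cmd wrapper is assumed, and the host NQN is a placeholder):

    # Sketch only -- reconstructed from the trace, flags abbreviated.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1        # enforce the host allow list
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=<host NQN>                                                  # rejected: host not in allow list
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 <host NQN>
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=<host NQN>                                                  # accepted
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 <host NQN>   # next connect is rejected again
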
00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:34.432 [2024-06-10 10:40:03.323745] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:34.432 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.432 could not add new controller: failed to write to nvme-fabrics device 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:34.432 10:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:34.690 10:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.690 
10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:34.690 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.690 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:34.690 10:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:36.592 10:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:36.850 10:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.850 10:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.850 10:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:36.850 10:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.850 10:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:36.850 10:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.787 [2024-06-10 10:40:06.538319] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:37.787 10:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:40.322 10:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:40.322 10:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:40.322 10:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.322 10:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:40.322 10:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.322 10:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:40.322 10:40:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.890 10:40:09 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 [2024-06-10 10:40:09.687684] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:40.890 10:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.891 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:40.891 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.891 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:40.891 10:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:43.434 10:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:43.434 10:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.434 10:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.434 10:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:43.434 10:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.434 10:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:43.434 10:40:11 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.002 [2024-06-10 10:40:12.828548] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:12:44.002 10:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:44.261 10:40:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.262 10:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:44.262 10:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.262 10:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:44.262 10:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:46.165 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:46.165 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:46.165 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.165 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:46.165 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.165 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:46.165 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.103 10:40:15 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.103 [2024-06-10 10:40:15.982457] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:47.103 10:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.104 10:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:47.104 10:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:47.363 10:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.363 10:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:47.363 10:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.363 10:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:47.363 10:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:49.291 10:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:49.291 10:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:49.291 10:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.291 10:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:49.291 10:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.291 10:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:49.291 10:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:50.230 
10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.230 [2024-06-10 10:40:19.139675] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:50.230 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:50.547 10:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.547 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:50.547 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.547 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 
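From this point the trace is the last of five identical passes over target/rpc.sh lines 81-94 (the "seq 1 5" loop traced earlier). Each pass boils down to the loop body below; this is a reconstruction from the traced commands, not the script source, and the --hostnqn/--hostid flags shown in the trace are elided:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        waitforserial SPDKISFASTANDAWESOME            # poll lsblk until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
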
00:12:50.547 10:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:52.453 10:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:52.453 10:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:52.453 10:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.453 10:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:52.453 10:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.453 10:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:52.453 10:40:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 [2024-06-10 10:40:22.297319] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 [2024-06-10 10:40:22.349493] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 [2024-06-10 10:40:22.401701] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.391 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.650 10:40:22 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.650 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 [2024-06-10 10:40:22.449877] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 [2024-06-10 10:40:22.498065] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 
192.168.100.8 port 4420 *** 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:53.651 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:53.651 "tick_rate": 2100000000, 00:12:53.651 "poll_groups": [ 00:12:53.651 { 00:12:53.651 "name": "nvmf_tgt_poll_group_000", 00:12:53.651 "admin_qpairs": 2, 00:12:53.651 "io_qpairs": 27, 00:12:53.651 "current_admin_qpairs": 0, 00:12:53.651 "current_io_qpairs": 0, 00:12:53.651 "pending_bdev_io": 0, 00:12:53.651 "completed_nvme_io": 128, 00:12:53.651 "transports": [ 00:12:53.651 { 00:12:53.651 "trtype": "RDMA", 00:12:53.651 "pending_data_buffer": 0, 00:12:53.651 "devices": [ 00:12:53.651 { 00:12:53.651 "name": "rocep175s0f0", 00:12:53.651 "polls": 2768581, 00:12:53.651 "idle_polls": 2768140, 00:12:53.651 "completions": 3911, 00:12:53.651 "requests": 3730, 00:12:53.651 "request_latency": 422411568, 00:12:53.651 "pending_free_request": 0, 00:12:53.651 "pending_rdma_read": 0, 00:12:53.651 "pending_rdma_write": 0, 00:12:53.651 "pending_rdma_send": 0, 00:12:53.651 "total_send_wrs": 307, 00:12:53.651 "send_doorbell_updates": 161, 00:12:53.651 "total_recv_wrs": 3730, 00:12:53.651 "recv_doorbell_updates": 188 00:12:53.651 }, 00:12:53.651 { 00:12:53.651 "name": "rocep175s0f1", 00:12:53.651 "polls": 2768581, 00:12:53.651 "idle_polls": 2768581, 00:12:53.651 "completions": 0, 00:12:53.651 "requests": 0, 00:12:53.651 "request_latency": 0, 00:12:53.651 "pending_free_request": 0, 00:12:53.651 "pending_rdma_read": 0, 00:12:53.651 "pending_rdma_write": 0, 00:12:53.651 
"pending_rdma_send": 0, 00:12:53.651 "total_send_wrs": 0, 00:12:53.651 "send_doorbell_updates": 0, 00:12:53.651 "total_recv_wrs": 0, 00:12:53.651 "recv_doorbell_updates": 0 00:12:53.651 } 00:12:53.651 ] 00:12:53.651 } 00:12:53.651 ] 00:12:53.651 }, 00:12:53.651 { 00:12:53.651 "name": "nvmf_tgt_poll_group_001", 00:12:53.651 "admin_qpairs": 2, 00:12:53.651 "io_qpairs": 26, 00:12:53.651 "current_admin_qpairs": 0, 00:12:53.651 "current_io_qpairs": 0, 00:12:53.651 "pending_bdev_io": 0, 00:12:53.651 "completed_nvme_io": 125, 00:12:53.651 "transports": [ 00:12:53.651 { 00:12:53.651 "trtype": "RDMA", 00:12:53.651 "pending_data_buffer": 0, 00:12:53.651 "devices": [ 00:12:53.651 { 00:12:53.651 "name": "rocep175s0f0", 00:12:53.651 "polls": 2803818, 00:12:53.651 "idle_polls": 2803393, 00:12:53.651 "completions": 3740, 00:12:53.651 "requests": 3565, 00:12:53.651 "request_latency": 403864836, 00:12:53.651 "pending_free_request": 0, 00:12:53.651 "pending_rdma_read": 0, 00:12:53.651 "pending_rdma_write": 0, 00:12:53.651 "pending_rdma_send": 0, 00:12:53.651 "total_send_wrs": 298, 00:12:53.651 "send_doorbell_updates": 150, 00:12:53.651 "total_recv_wrs": 3565, 00:12:53.651 "recv_doorbell_updates": 176 00:12:53.651 }, 00:12:53.651 { 00:12:53.651 "name": "rocep175s0f1", 00:12:53.651 "polls": 2803818, 00:12:53.651 "idle_polls": 2803818, 00:12:53.651 "completions": 0, 00:12:53.651 "requests": 0, 00:12:53.651 "request_latency": 0, 00:12:53.651 "pending_free_request": 0, 00:12:53.651 "pending_rdma_read": 0, 00:12:53.651 "pending_rdma_write": 0, 00:12:53.651 "pending_rdma_send": 0, 00:12:53.651 "total_send_wrs": 0, 00:12:53.651 "send_doorbell_updates": 0, 00:12:53.651 "total_recv_wrs": 0, 00:12:53.651 "recv_doorbell_updates": 0 00:12:53.651 } 00:12:53.651 ] 00:12:53.651 } 00:12:53.651 ] 00:12:53.651 }, 00:12:53.651 { 00:12:53.651 "name": "nvmf_tgt_poll_group_002", 00:12:53.651 "admin_qpairs": 1, 00:12:53.651 "io_qpairs": 26, 00:12:53.651 "current_admin_qpairs": 0, 00:12:53.651 "current_io_qpairs": 0, 00:12:53.651 "pending_bdev_io": 0, 00:12:53.651 "completed_nvme_io": 125, 00:12:53.651 "transports": [ 00:12:53.651 { 00:12:53.651 "trtype": "RDMA", 00:12:53.651 "pending_data_buffer": 0, 00:12:53.651 "devices": [ 00:12:53.651 { 00:12:53.651 "name": "rocep175s0f0", 00:12:53.651 "polls": 2733417, 00:12:53.651 "idle_polls": 2733042, 00:12:53.651 "completions": 3694, 00:12:53.651 "requests": 3542, 00:12:53.651 "request_latency": 401670444, 00:12:53.651 "pending_free_request": 0, 00:12:53.651 "pending_rdma_read": 0, 00:12:53.651 "pending_rdma_write": 0, 00:12:53.651 "pending_rdma_send": 0, 00:12:53.651 "total_send_wrs": 263, 00:12:53.651 "send_doorbell_updates": 128, 00:12:53.651 "total_recv_wrs": 3542, 00:12:53.651 "recv_doorbell_updates": 154 00:12:53.651 }, 00:12:53.651 { 00:12:53.651 "name": "rocep175s0f1", 00:12:53.651 "polls": 2733417, 00:12:53.651 "idle_polls": 2733417, 00:12:53.651 "completions": 0, 00:12:53.651 "requests": 0, 00:12:53.651 "request_latency": 0, 00:12:53.651 "pending_free_request": 0, 00:12:53.651 "pending_rdma_read": 0, 00:12:53.651 "pending_rdma_write": 0, 00:12:53.651 "pending_rdma_send": 0, 00:12:53.651 "total_send_wrs": 0, 00:12:53.651 "send_doorbell_updates": 0, 00:12:53.651 "total_recv_wrs": 0, 00:12:53.651 "recv_doorbell_updates": 0 00:12:53.651 } 00:12:53.651 ] 00:12:53.651 } 00:12:53.651 ] 00:12:53.651 }, 00:12:53.651 { 00:12:53.651 "name": "nvmf_tgt_poll_group_003", 00:12:53.651 "admin_qpairs": 2, 00:12:53.651 "io_qpairs": 26, 00:12:53.651 "current_admin_qpairs": 0, 00:12:53.651 
"current_io_qpairs": 0, 00:12:53.651 "pending_bdev_io": 0, 00:12:53.651 "completed_nvme_io": 77, 00:12:53.651 "transports": [ 00:12:53.651 { 00:12:53.651 "trtype": "RDMA", 00:12:53.651 "pending_data_buffer": 0, 00:12:53.651 "devices": [ 00:12:53.651 { 00:12:53.651 "name": "rocep175s0f0", 00:12:53.651 "polls": 2156930, 00:12:53.651 "idle_polls": 2156589, 00:12:53.651 "completions": 3644, 00:12:53.651 "requests": 3517, 00:12:53.651 "request_latency": 401718692, 00:12:53.652 "pending_free_request": 0, 00:12:53.652 "pending_rdma_read": 0, 00:12:53.652 "pending_rdma_write": 0, 00:12:53.652 "pending_rdma_send": 0, 00:12:53.652 "total_send_wrs": 202, 00:12:53.652 "send_doorbell_updates": 114, 00:12:53.652 "total_recv_wrs": 3517, 00:12:53.652 "recv_doorbell_updates": 140 00:12:53.652 }, 00:12:53.652 { 00:12:53.652 "name": "rocep175s0f1", 00:12:53.652 "polls": 2156930, 00:12:53.652 "idle_polls": 2156930, 00:12:53.652 "completions": 0, 00:12:53.652 "requests": 0, 00:12:53.652 "request_latency": 0, 00:12:53.652 "pending_free_request": 0, 00:12:53.652 "pending_rdma_read": 0, 00:12:53.652 "pending_rdma_write": 0, 00:12:53.652 "pending_rdma_send": 0, 00:12:53.652 "total_send_wrs": 0, 00:12:53.652 "send_doorbell_updates": 0, 00:12:53.652 "total_recv_wrs": 0, 00:12:53.652 "recv_doorbell_updates": 0 00:12:53.652 } 00:12:53.652 ] 00:12:53.652 } 00:12:53.652 ] 00:12:53.652 } 00:12:53.652 ] 00:12:53.652 }' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:12:53.652 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 14989 > 0 )) 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 
1629665540 > 0 )) 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:53.911 rmmod nvme_rdma 00:12:53.911 rmmod nvme_fabrics 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3983583 ']' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3983583 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 3983583 ']' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 3983583 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3983583 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3983583' 00:12:53.911 killing process with pid 3983583 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 3983583 00:12:53.911 10:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 3983583 00:12:54.170 10:40:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.170 10:40:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:54.170 00:12:54.170 real 0m30.443s 00:12:54.170 user 1m38.542s 00:12:54.170 sys 0m6.053s 00:12:54.170 10:40:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:54.170 10:40:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.170 ************************************ 00:12:54.170 END TEST nvmf_rpc 00:12:54.170 ************************************ 00:12:54.170 10:40:23 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:54.170 10:40:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:54.170 10:40:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:54.170 10:40:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:54.170 ************************************ 00:12:54.170 START TEST nvmf_invalid 00:12:54.170 ************************************ 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:54.170 * Looking for test storage... 00:12:54.170 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.170 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.430 10:40:23 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.430 10:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.996 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.996 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.996 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.996 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.996 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.996 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.996 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.997 
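
The device scan traced above works from a PCI bus cache keyed by vendor:device ID: Intel E810 ports (0x8086:0x1592/0x159b) land in the e810 array, X722 (0x8086:0x37d2) in x722, and the Mellanox ConnectX IDs in mlx, after which the arrays are merged into pci_devs. A minimal sketch of the same bucketing, assuming lspci is available; the scan loop is illustrative, not the harness's exact code:

    declare -a e810=() x722=() mlx=()
    while read -r addr _class vd _rest; do
        case "$vd" in
            8086:1592 | 8086:159b) e810+=("$addr") ;; # Intel E810 ports (ice)
            8086:37d2) x722+=("$addr") ;;             # Intel X722
            15b3:*) mlx+=("$addr") ;;                 # Mellanox ConnectX family
        esac
    done < <(lspci -Dn)
    pci_devs=("${e810[@]}")  # this run then keys on the e810 list, as the [[ e810 == e810 ]] branch shows
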
10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:00.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:00.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@377 -- # modinfo irdma 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:00.997 Found net devices under 0000:af:00.0: cvl_0_0 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:00.997 Found net devices under 0000:af:00.1: cvl_0_1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:13:00.997 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:00.997 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:00.997 altname enp175s0f0np0 00:13:00.997 altname ens801f0np0 00:13:00.997 inet 192.168.100.8/24 scope global cvl_0_0 00:13:00.997 valid_lft forever preferred_lft forever 00:13:00.997 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:00.997 valid_lft forever preferred_lft forever 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:00.997 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:13:00.998 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:00.998 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:00.998 altname enp175s0f1np1 00:13:00.998 altname ens801f1np1 00:13:00.998 inet 192.168.100.9/24 scope global cvl_0_1 00:13:00.998 valid_lft forever preferred_lft forever 00:13:00.998 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:00.998 valid_lft forever preferred_lft 
forever 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:00.998 192.168.100.9' 00:13:00.998 
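
allocate_nic_ips, traced above, resolves each RDMA-capable interface to its IPv4 address with a one-line ip/awk/cut pipeline and concatenates the results into RDMA_IP_LIST; the head/tail split that follows in the trace then peels off the first and second target IPs. A condensed restatement, where cvl_0_0/cvl_0_1 and the 192.168.100.x values are this run's:

    get_ip_address() {
        local interface=$1  # e.g. cvl_0_0
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for ifc in cvl_0_0 cvl_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
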
10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:00.998 192.168.100.9' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:00.998 192.168.100.9' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3991066 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3991066 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 3991066 ']' 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.998 10:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.998 [2024-06-10 10:40:29.013935] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:00.998 [2024-06-10 10:40:29.013989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.998 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.998 [2024-06-10 10:40:29.074594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.998 [2024-06-10 10:40:29.153488] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.998 [2024-06-10 10:40:29.153527] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
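
nvmfappstart, above, launches the freshly built nvmf_tgt with the flags shown (-i 0 -e 0xFFFF -m 0xF), records its pid (3991066 here), and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A paraphrased sketch of that start-and-wait pattern; the polling loop is an approximation, not the harness's exact waitforlisten:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!  # 3991066 in this run
    # poll until the target answers on its RPC socket; rpc_get_methods is a cheap query
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done
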
00:13:00.998 [2024-06-10 10:40:29.153534] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.998 [2024-06-10 10:40:29.153540] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.998 [2024-06-10 10:40:29.153545] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.998 [2024-06-10 10:40:29.153589] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.998 [2024-06-10 10:40:29.153605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.998 [2024-06-10 10:40:29.153697] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.998 [2024-06-10 10:40:29.153698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.998 10:40:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28791 00:13:00.998 [2024-06-10 10:40:30.001231] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:01.257 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:01.257 { 00:13:01.257 "nqn": "nqn.2016-06.io.spdk:cnode28791", 00:13:01.257 "tgt_name": "foobar", 00:13:01.257 "method": "nvmf_create_subsystem", 00:13:01.257 "req_id": 1 00:13:01.257 } 00:13:01.257 Got JSON-RPC error response 00:13:01.257 response: 00:13:01.257 { 00:13:01.257 "code": -32603, 00:13:01.257 "message": "Unable to find target foobar" 00:13:01.257 }' 00:13:01.257 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:01.257 { 00:13:01.257 "nqn": "nqn.2016-06.io.spdk:cnode28791", 00:13:01.257 "tgt_name": "foobar", 00:13:01.257 "method": "nvmf_create_subsystem", 00:13:01.257 "req_id": 1 00:13:01.257 } 00:13:01.257 Got JSON-RPC error response 00:13:01.257 response: 00:13:01.257 { 00:13:01.257 "code": -32603, 00:13:01.257 "message": "Unable to find target foobar" 00:13:01.257 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:01.257 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:01.257 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29207 00:13:01.257 [2024-06-10 10:40:30.181872] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29207: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:01.258 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:01.258 { 00:13:01.258 "nqn": "nqn.2016-06.io.spdk:cnode29207", 
00:13:01.258 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:01.258 "method": "nvmf_create_subsystem", 00:13:01.258 "req_id": 1 00:13:01.258 } 00:13:01.258 Got JSON-RPC error response 00:13:01.258 response: 00:13:01.258 { 00:13:01.258 "code": -32602, 00:13:01.258 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:01.258 }' 00:13:01.258 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:01.258 { 00:13:01.258 "nqn": "nqn.2016-06.io.spdk:cnode29207", 00:13:01.258 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:01.258 "method": "nvmf_create_subsystem", 00:13:01.258 "req_id": 1 00:13:01.258 } 00:13:01.258 Got JSON-RPC error response 00:13:01.258 response: 00:13:01.258 { 00:13:01.258 "code": -32602, 00:13:01.258 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:01.258 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:01.258 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:01.258 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30078 00:13:01.516 [2024-06-10 10:40:30.366468] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30078: invalid model number 'SPDK_Controller' 00:13:01.516 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:01.516 { 00:13:01.516 "nqn": "nqn.2016-06.io.spdk:cnode30078", 00:13:01.516 "model_number": "SPDK_Controller\u001f", 00:13:01.516 "method": "nvmf_create_subsystem", 00:13:01.516 "req_id": 1 00:13:01.516 } 00:13:01.516 Got JSON-RPC error response 00:13:01.516 response: 00:13:01.516 { 00:13:01.516 "code": -32602, 00:13:01.516 "message": "Invalid MN SPDK_Controller\u001f" 00:13:01.516 }' 00:13:01.516 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:01.516 { 00:13:01.516 "nqn": "nqn.2016-06.io.spdk:cnode30078", 00:13:01.516 "model_number": "SPDK_Controller\u001f", 00:13:01.516 "method": "nvmf_create_subsystem", 00:13:01.516 "req_id": 1 00:13:01.516 } 00:13:01.516 Got JSON-RPC error response 00:13:01.516 response: 00:13:01.516 { 00:13:01.516 "code": -32602, 00:13:01.516 "message": "Invalid MN SPDK_Controller\u001f" 00:13:01.516 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.516 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:01.516 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:01.517 
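
The three fixed-input rejections above (unknown target name, serial number with a trailing control character, model number with a trailing control character) all follow one pattern: invoke nvmf_create_subsystem with the bad value, capture the JSON-RPC error body, and glob-match the expected message; the randomly generated SN/MN cases being built below reuse the same pattern. Condensed, with the rpc.py path shortened and '|| true' guarding the expected nonzero exit under set -e:

    rpc=./scripts/rpc.py  # full path shortened from the trace
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28791 2>&1) || true
    [[ $out == *"Unable to find target"* ]]  # JSON-RPC code -32603 above
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29207 2>&1) || true
    [[ $out == *"Invalid SN"* ]]             # JSON-RPC code -32602 above
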
10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x24' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 
00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'cC]ox&J;$Un>d84LI@QE' 00:13:01.517 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'cC]ox&J;$Un>d84LI@QE' nqn.2016-06.io.spdk:cnode24928 00:13:01.776 [2024-06-10 10:40:30.695555] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24928: invalid serial number 'cC]ox&J;$Un>d84LI@QE' 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:01.776 { 00:13:01.776 "nqn": "nqn.2016-06.io.spdk:cnode24928", 00:13:01.776 "serial_number": "cC]ox&J;$Un>d84LI@\u007fQE", 00:13:01.776 "method": "nvmf_create_subsystem", 00:13:01.776 "req_id": 1 00:13:01.776 } 00:13:01.776 Got JSON-RPC error response 00:13:01.776 response: 00:13:01.776 { 00:13:01.776 "code": -32602, 00:13:01.776 "message": "Invalid SN cC]ox&J;$Un>d84LI@\u007fQE" 00:13:01.776 }' 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:01.776 { 00:13:01.776 "nqn": "nqn.2016-06.io.spdk:cnode24928", 00:13:01.776 "serial_number": "cC]ox&J;$Un>d84LI@\u007fQE", 00:13:01.776 "method": "nvmf_create_subsystem", 00:13:01.776 "req_id": 1 00:13:01.776 } 00:13:01.776 Got JSON-RPC error response 00:13:01.776 response: 00:13:01.776 { 00:13:01.776 "code": -32602, 00:13:01.776 "message": "Invalid SN 
cC]ox&J;$Un>d84LI@\u007fQE" 00:13:01.776 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.776 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x69' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.777 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:02.036 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.037 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '$3BV_ib2jE?t?YMV{;.+Hc__+iI6#"H*|`}Bh+^72' 00:13:02.038 10:40:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$3BV_ib2jE?t?YMV{;.+Hc__+iI6#"H*|`}Bh+^72' nqn.2016-06.io.spdk:cnode24169 00:13:02.296 [2024-06-10 10:40:31.137066] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24169: invalid model number '$3BV_ib2jE?t?YMV{;.+Hc__+iI6#"H*|`}Bh+^72' 00:13:02.296 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:02.297 { 00:13:02.297 "nqn": "nqn.2016-06.io.spdk:cnode24169", 00:13:02.297 "model_number": 
"$3BV_ib2jE?t?YMV{;.+Hc__+iI6#\"H*|`}Bh+^72", 00:13:02.297 "method": "nvmf_create_subsystem", 00:13:02.297 "req_id": 1 00:13:02.297 } 00:13:02.297 Got JSON-RPC error response 00:13:02.297 response: 00:13:02.297 { 00:13:02.297 "code": -32602, 00:13:02.297 "message": "Invalid MN $3BV_ib2jE?t?YMV{;.+Hc__+iI6#\"H*|`}Bh+^72" 00:13:02.297 }' 00:13:02.297 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:02.297 { 00:13:02.297 "nqn": "nqn.2016-06.io.spdk:cnode24169", 00:13:02.297 "model_number": "$3BV_ib2jE?t?YMV{;.+Hc__+iI6#\"H*|`}Bh+^72", 00:13:02.297 "method": "nvmf_create_subsystem", 00:13:02.297 "req_id": 1 00:13:02.297 } 00:13:02.297 Got JSON-RPC error response 00:13:02.297 response: 00:13:02.297 { 00:13:02.297 "code": -32602, 00:13:02.297 "message": "Invalid MN $3BV_ib2jE?t?YMV{;.+Hc__+iI6#\"H*|`}Bh+^72" 00:13:02.297 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:02.297 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:13:02.555 [2024-06-10 10:40:31.331009] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x169a190/0x16997d0) succeed. 00:13:02.555 [2024-06-10 10:40:31.339784] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x169b540/0x1699d50) succeed. 00:13:02.555 [2024-06-10 10:40:31.339804] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:13:02.555 [2024-06-10 10:40:31.341964] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:13:02.555 [2024-06-10 10:40:31.341986] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:13:02.555 [2024-06-10 10:40:31.342413] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:13:02.555 [2024-06-10 10:40:31.343437] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:13:02.555 [2024-06-10 10:40:31.343456] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:13:02.555 [2024-06-10 10:40:31.343880] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:13:02.555 [2024-06-10 10:40:31.344875] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 257/767 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:13:02.555 [2024-06-10 10:40:31.344890] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:13:02.555 [2024-06-10 10:40:31.345319] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 
00:13:02.555 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:02.555 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:13:02.555 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:13:02.555 192.168.100.9' 00:13:02.555 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:02.555 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:13:02.555 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:13:02.814 [2024-06-10 10:40:31.709303] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:02.814 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:02.814 { 00:13:02.814 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.814 "listen_address": { 00:13:02.814 "trtype": "rdma", 00:13:02.814 "traddr": "192.168.100.8", 00:13:02.814 "trsvcid": "4421" 00:13:02.814 }, 00:13:02.814 "method": "nvmf_subsystem_remove_listener", 00:13:02.814 "req_id": 1 00:13:02.814 } 00:13:02.814 Got JSON-RPC error response 00:13:02.814 response: 00:13:02.814 { 00:13:02.814 "code": -32602, 00:13:02.814 "message": "Invalid parameters" 00:13:02.814 }' 00:13:02.814 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:02.814 { 00:13:02.814 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.814 "listen_address": { 00:13:02.814 "trtype": "rdma", 00:13:02.814 "traddr": "192.168.100.8", 00:13:02.814 "trsvcid": "4421" 00:13:02.814 }, 00:13:02.814 "method": "nvmf_subsystem_remove_listener", 00:13:02.814 "req_id": 1 00:13:02.814 } 00:13:02.814 Got JSON-RPC error response 00:13:02.814 response: 00:13:02.814 { 00:13:02.814 "code": -32602, 00:13:02.814 "message": "Invalid parameters" 00:13:02.814 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:02.814 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10458 -i 0 00:13:03.073 [2024-06-10 10:40:31.901920] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10458: invalid cntlid range [0-65519] 00:13:03.073 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:03.073 { 00:13:03.073 "nqn": "nqn.2016-06.io.spdk:cnode10458", 00:13:03.073 "min_cntlid": 0, 00:13:03.073 "method": "nvmf_create_subsystem", 00:13:03.073 "req_id": 1 00:13:03.073 } 00:13:03.073 Got JSON-RPC error response 00:13:03.073 response: 00:13:03.073 { 00:13:03.073 "code": -32602, 00:13:03.073 "message": "Invalid cntlid range [0-65519]" 00:13:03.073 }' 00:13:03.073 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:03.073 { 00:13:03.073 "nqn": "nqn.2016-06.io.spdk:cnode10458", 00:13:03.073 "min_cntlid": 0, 00:13:03.073 "method": "nvmf_create_subsystem", 00:13:03.073 "req_id": 1 00:13:03.073 } 00:13:03.073 Got JSON-RPC error response 00:13:03.073 response: 00:13:03.073 { 00:13:03.073 "code": -32602, 00:13:03.073 "message": "Invalid cntlid range [0-65519]" 00:13:03.073 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.073 10:40:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29534 -i 65520 00:13:03.073 [2024-06-10 10:40:32.094655] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29534: invalid cntlid range [65520-65519] 00:13:03.331 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:03.331 { 00:13:03.331 "nqn": "nqn.2016-06.io.spdk:cnode29534", 00:13:03.331 "min_cntlid": 65520, 00:13:03.331 "method": "nvmf_create_subsystem", 00:13:03.331 "req_id": 1 00:13:03.331 } 00:13:03.331 Got JSON-RPC error response 00:13:03.331 response: 00:13:03.332 { 00:13:03.332 "code": -32602, 00:13:03.332 "message": "Invalid cntlid range [65520-65519]" 00:13:03.332 }' 00:13:03.332 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:03.332 { 00:13:03.332 "nqn": "nqn.2016-06.io.spdk:cnode29534", 00:13:03.332 "min_cntlid": 65520, 00:13:03.332 "method": "nvmf_create_subsystem", 00:13:03.332 "req_id": 1 00:13:03.332 } 00:13:03.332 Got JSON-RPC error response 00:13:03.332 response: 00:13:03.332 { 00:13:03.332 "code": -32602, 00:13:03.332 "message": "Invalid cntlid range [65520-65519]" 00:13:03.332 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.332 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18080 -I 0 00:13:03.332 [2024-06-10 10:40:32.267349] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18080: invalid cntlid range [1-0] 00:13:03.332 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:03.332 { 00:13:03.332 "nqn": "nqn.2016-06.io.spdk:cnode18080", 00:13:03.332 "max_cntlid": 0, 00:13:03.332 "method": "nvmf_create_subsystem", 00:13:03.332 "req_id": 1 00:13:03.332 } 00:13:03.332 Got JSON-RPC error response 00:13:03.332 response: 00:13:03.332 { 00:13:03.332 "code": -32602, 00:13:03.332 "message": "Invalid cntlid range [1-0]" 00:13:03.332 }' 00:13:03.332 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:03.332 { 00:13:03.332 "nqn": "nqn.2016-06.io.spdk:cnode18080", 00:13:03.332 "max_cntlid": 0, 00:13:03.332 "method": "nvmf_create_subsystem", 00:13:03.332 "req_id": 1 00:13:03.332 } 00:13:03.332 Got JSON-RPC error response 00:13:03.332 response: 00:13:03.332 { 00:13:03.332 "code": -32602, 00:13:03.332 "message": "Invalid cntlid range [1-0]" 00:13:03.332 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.332 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25415 -I 65520 00:13:03.590 [2024-06-10 10:40:32.452004] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25415: invalid cntlid range [1-65520] 00:13:03.590 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:03.590 { 00:13:03.590 "nqn": "nqn.2016-06.io.spdk:cnode25415", 00:13:03.590 "max_cntlid": 65520, 00:13:03.590 "method": "nvmf_create_subsystem", 00:13:03.590 "req_id": 1 00:13:03.590 } 00:13:03.590 Got JSON-RPC error response 00:13:03.590 response: 00:13:03.590 { 00:13:03.590 "code": -32602, 00:13:03.590 "message": "Invalid cntlid range [1-65520]" 00:13:03.590 }' 00:13:03.590 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:03.590 { 00:13:03.590 "nqn": 
"nqn.2016-06.io.spdk:cnode25415", 00:13:03.590 "max_cntlid": 65520, 00:13:03.590 "method": "nvmf_create_subsystem", 00:13:03.590 "req_id": 1 00:13:03.590 } 00:13:03.590 Got JSON-RPC error response 00:13:03.590 response: 00:13:03.590 { 00:13:03.590 "code": -32602, 00:13:03.590 "message": "Invalid cntlid range [1-65520]" 00:13:03.590 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.590 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11295 -i 6 -I 5 00:13:03.848 [2024-06-10 10:40:32.628659] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11295: invalid cntlid range [6-5] 00:13:03.848 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:03.848 { 00:13:03.848 "nqn": "nqn.2016-06.io.spdk:cnode11295", 00:13:03.848 "min_cntlid": 6, 00:13:03.848 "max_cntlid": 5, 00:13:03.848 "method": "nvmf_create_subsystem", 00:13:03.848 "req_id": 1 00:13:03.848 } 00:13:03.848 Got JSON-RPC error response 00:13:03.848 response: 00:13:03.848 { 00:13:03.848 "code": -32602, 00:13:03.848 "message": "Invalid cntlid range [6-5]" 00:13:03.848 }' 00:13:03.848 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:03.848 { 00:13:03.849 "nqn": "nqn.2016-06.io.spdk:cnode11295", 00:13:03.849 "min_cntlid": 6, 00:13:03.849 "max_cntlid": 5, 00:13:03.849 "method": "nvmf_create_subsystem", 00:13:03.849 "req_id": 1 00:13:03.849 } 00:13:03.849 Got JSON-RPC error response 00:13:03.849 response: 00:13:03.849 { 00:13:03.849 "code": -32602, 00:13:03.849 "message": "Invalid cntlid range [6-5]" 00:13:03.849 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:03.849 { 00:13:03.849 "name": "foobar", 00:13:03.849 "method": "nvmf_delete_target", 00:13:03.849 "req_id": 1 00:13:03.849 } 00:13:03.849 Got JSON-RPC error response 00:13:03.849 response: 00:13:03.849 { 00:13:03.849 "code": -32602, 00:13:03.849 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:03.849 }' 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:03.849 { 00:13:03.849 "name": "foobar", 00:13:03.849 "method": "nvmf_delete_target", 00:13:03.849 "req_id": 1 00:13:03.849 } 00:13:03.849 Got JSON-RPC error response 00:13:03.849 response: 00:13:03.849 { 00:13:03.849 "code": -32602, 00:13:03.849 "message": "The specified target doesn't exist, cannot delete it." 
00:13:03.849 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:03.849 rmmod nvme_rdma 00:13:03.849 rmmod nvme_fabrics 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3991066 ']' 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3991066 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 3991066 ']' 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 3991066 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3991066 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3991066' 00:13:03.849 killing process with pid 3991066 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 3991066 00:13:03.849 10:40:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 3991066 00:13:04.108 10:40:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.108 10:40:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:04.108 00:13:04.108 real 0m9.927s 00:13:04.108 user 0m19.358s 00:13:04.108 sys 0m5.273s 00:13:04.108 10:40:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:04.108 10:40:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:04.108 ************************************ 00:13:04.108 END TEST nvmf_invalid 00:13:04.108 ************************************ 00:13:04.108 10:40:33 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:13:04.108 10:40:33 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:04.108 10:40:33 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:04.108 10:40:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:04.108 
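Taken together, the cntlid probes in the nvmf_invalid run above bracket the valid controller-ID window from every side: min_cntlid 0 and max_cntlid 65520 fall outside it, while max_cntlid 0 and the min 6 / max 5 pair leave an empty range. The error strings imply the target accepts 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF). An illustrative shell mirror of that rule, not SPDK's actual C validation:

    # cntlid bounds implied by the errors above: 1 <= min <= max <= 65519.
    validate_cntlid_range() {
        local min=$1 max=$2
        (( min >= 1 && max <= 65519 && min <= max )) || {
            echo "Invalid cntlid range [$min-$max]" >&2
            return 1
        }
    }

    validate_cntlid_range 0 65519      # rejected: min below 1
    validate_cntlid_range 65520 65519  # rejected: min out of bounds, range empty
    validate_cntlid_range 1 0          # rejected: empty range
    validate_cntlid_range 6 5          # rejected: empty range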
************************************ 00:13:04.108 START TEST nvmf_abort 00:13:04.108 ************************************ 00:13:04.108 10:40:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:13:04.368 * Looking for test storage... 00:13:04.368 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.368 10:40:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.960 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:10.961 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:10.961 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@377 -- # modinfo irdma 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:10.961 Found net devices under 0000:af:00.0: cvl_0_0 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.961 10:40:38 
nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:10.961 Found net devices under 0000:af:00.1: cvl_0_1 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:10.961 10:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:13:10.961 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:10.961 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:10.961 altname enp175s0f0np0 00:13:10.961 altname ens801f0np0 00:13:10.961 inet 192.168.100.8/24 scope global cvl_0_0 00:13:10.961 valid_lft forever preferred_lft forever 00:13:10.961 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:10.961 valid_lft forever preferred_lft forever 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:10.961 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:13:10.961 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:10.961 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:10.961 altname enp175s0f1np1 00:13:10.961 altname ens801f1np1 00:13:10.961 inet 192.168.100.9/24 scope global cvl_0_1 00:13:10.961 valid_lft forever preferred_lft forever 00:13:10.961 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:10.962 valid_lft forever preferred_lft forever 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:10.962 192.168.100.9' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:10.962 192.168.100.9' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:10.962 192.168.100.9' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 
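Every address in RDMA_IP_LIST above comes from the same three-stage pipeline: print the one-line ip -o -4 output for the interface, take the fourth field (the CIDR address) with awk, and strip the prefix length with cut. A condensed sketch of the get_ip_address helper as the trace shows it:

    # Resolve an interface's IPv4 address from one-line `ip -o -4` output.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address cvl_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address cvl_0_1)   # 192.168.100.9 in this run

In the trace the two exported values are actually carved out of RDMA_IP_LIST with head -n 1 and tail -n +2 | head -n 1, but the per-interface resolution is the pipeline above.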
00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3995450 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3995450 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 3995450 ']' 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:10.962 10:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.962 [2024-06-10 10:40:39.252329] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:10.962 [2024-06-10 10:40:39.252375] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.962 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.962 [2024-06-10 10:40:39.313037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:10.962 [2024-06-10 10:40:39.387323] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.962 [2024-06-10 10:40:39.387360] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.962 [2024-06-10 10:40:39.387367] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.962 [2024-06-10 10:40:39.387373] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.962 [2024-06-10 10:40:39.387378] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.962 [2024-06-10 10:40:39.387476] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.962 [2024-06-10 10:40:39.387581] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.962 [2024-06-10 10:40:39.387583] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 [2024-06-10 10:40:40.120289] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x16a80d0/0x16a7710) succeed. 00:13:11.221 [2024-06-10 10:40:40.129036] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x16a9400/0x16a7c90) succeed. 00:13:11.221 [2024-06-10 10:40:40.129057] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 Malloc0 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 Delay0 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort 
-- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 [2024-06-10 10:40:40.201646] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.221 10:40:40 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:11.221 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.478 [2024-06-10 10:40:40.280142] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:13.447 Initializing NVMe Controllers 00:13:13.447 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:13:13.447 controller IO queue size 128 less than required 00:13:13.447 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:13.447 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:13.447 Initialization complete. Launching workers. 
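The abort example launched here drives I/O through the Delay0 bdev created earlier (bdev_delay_create with 1,000,000-microsecond average and p99 read/write latencies), so submitted requests sit in flight long enough for the worker to chase them with abort commands; the NS/CTRLR summary below tallies how that went. A reformatted copy of the invocation with a hedged reading of its flags, inferred from this run rather than from the tool's documentation:

    # abort example invocation from the trace; flag notes are our reading of the run.
    abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
          -c 0x1 -t 1 -l warning -q 128
    # -r ...      target address, matching the listener added at abort.sh@26
    # -c 0x1      run the workload on core 0 ("Associating ... with lcore 0")
    # -t 1        run time, in seconds
    # -l warning  log level; only the discovery-probe *WARNING* gets through
    # -q 128      queue depth; the controller's 128-entry IO queue cannot hold
    #             the full depth, hence the "queue size 128 less than required"
    #             notice and the driver-side queueing it warns about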
00:13:13.448 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 52208 00:13:13.448 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 52269, failed to submit 62 00:13:13.448 success 52209, unsuccess 60, failed 0 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:13.448 rmmod nvme_rdma 00:13:13.448 rmmod nvme_fabrics 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3995450 ']' 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3995450 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 3995450 ']' 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 3995450 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:13.448 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3995450 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3995450' 00:13:13.707 killing process with pid 3995450 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@968 -- # kill 3995450 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@973 -- # wait 3995450 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:13.707 00:13:13.707 real 0m9.602s 00:13:13.707 user 0m13.901s 00:13:13.707 sys 0m4.901s 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:13.707 10:40:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.707 ************************************ 00:13:13.707 
END TEST nvmf_abort 00:13:13.707 ************************************ 00:13:13.966 10:40:42 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:13:13.966 10:40:42 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:13.966 10:40:42 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:13.966 10:40:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:13.966 ************************************ 00:13:13.966 START TEST nvmf_ns_hotplug_stress 00:13:13.966 ************************************ 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:13:13.966 * Looking for test storage... 00:13:13.966 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.966 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.967 10:40:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:20.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:20.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.531 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # modinfo irdma 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:20.532 Found net devices under 0000:af:00.0: cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:20.532 Found net devices under 0000:af:00.1: cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- 
# modprobe ib_umad 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:13:20.532 
16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:20.532 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:20.532 altname enp175s0f0np0 00:13:20.532 altname ens801f0np0 00:13:20.532 inet 192.168.100.8/24 scope global cvl_0_0 00:13:20.532 valid_lft forever preferred_lft forever 00:13:20.532 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:20.532 valid_lft forever preferred_lft forever 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:13:20.532 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:20.532 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:20.532 altname enp175s0f1np1 00:13:20.532 altname ens801f1np1 00:13:20.532 inet 192.168.100.9/24 scope global cvl_0_1 00:13:20.532 valid_lft forever preferred_lft forever 00:13:20.532 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:20.532 valid_lft forever preferred_lft forever 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:13:20.532 
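[editor's note] The address harvesting traced here is a plain pipeline: "ip -o -4" prints one line per address, field 4 is ADDR/PREFIX, and the prefix is cut off. A sketch of what get_ip_address in nvmf/common.sh does for each RDMA-capable interface (interface names taken from the trace):

# Extract an interface's IPv4 address, as traced above.
ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
ip -o -4 addr show cvl_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9

The first address becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP a few lines further on.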
10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:20.532 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:20.533 192.168.100.9' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:20.533 192.168.100.9' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:20.533 192.168.100.9' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3999473 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3999473 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 3999473 ']' 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.533 10:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.533 [2024-06-10 10:40:48.904711] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:20.533 [2024-06-10 10:40:48.904755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.533 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.533 [2024-06-10 10:40:48.965435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.533 [2024-06-10 10:40:49.036942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.533 [2024-06-10 10:40:49.037003] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.533 [2024-06-10 10:40:49.037011] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.533 [2024-06-10 10:40:49.037017] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.533 [2024-06-10 10:40:49.037021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
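[editor's note] nvmfappstart -m 0xE above expands to nvmf_tgt -i 0 -e 0xFFFF -m 0xE, and the core mask explains the reactor notices that follow: 0xE is binary 1110, i.e. cores 1, 2 and 3, leaving core 0 free. A throwaway way to decode such a mask in the shell (a sketch; any bit width works the same way):

# Decode an SPDK core mask into the cores it selects.
mask=0xE
for core in $(seq 0 31); do
  (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done
# prints cores 1, 2 and 3 — matching the three reactor_run notices below.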
00:13:20.533 [2024-06-10 10:40:49.037125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.533 [2024-06-10 10:40:49.037147] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.533 [2024-06-10 10:40:49.037146] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:20.791 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:21.049 [2024-06-10 10:40:49.910366] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x17a00d0/0x179f710) succeed. 00:13:21.049 [2024-06-10 10:40:49.919098] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x17a1400/0x179fc90) succeed. 00:13:21.049 [2024-06-10 10:40:49.919121] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:13:21.049 10:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.308 10:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:21.308 [2024-06-10 10:40:50.252523] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:21.308 10:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:21.567 10:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:21.825 Malloc0 00:13:21.825 10:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:21.825 Delay0 00:13:21.825 10:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.084 10:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:22.342 NULL1 
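[editor's note] From here the test settles into its stress loop: spdk_nvme_perf (PERF_PID, 30 s of 512-byte random reads at queue depth 128) runs against cnode1 while the script repeatedly yanks and re-adds namespace 1 and grows the NULL1 bdev one block at a time. A sketch of one cycle, reconstructed from the ns_hotplug_stress.sh trace below (variable names follow the trace; timing and error plumbing omitted):

# The hotplug-stress cycle traced below, in outline.
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                                # loop while the perf run is alive
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # remove namespace 1 under load
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # re-add a namespace
  scripts/rpc.py bdev_null_resize NULL1 $(( ++null_size ))               # resize to 1001, 1002, ...
done

The bursts of "Read completed with error (sct=0, sc=11)" below are the expected side effect: reads in flight when namespace 1 disappears complete with an error status, and the output is rate-limited so each printed line stands in for 1000 suppressed ones.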
00:13:22.342 10:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:22.602 10:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3999953 00:13:22.602 10:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:22.602 10:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:22.602 10:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.602 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.538 Read completed with error (sct=0, sc=11) 00:13:23.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.539 10:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.796 10:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:23.796 10:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:24.054 true 00:13:24.054 10:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:24.054 10:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 10:40:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.991 10:40:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:24.991 10:40:53 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:25.250 true 00:13:25.250 10:40:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:25.250 10:40:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.185 10:40:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.185 10:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:26.185 10:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:26.444 true 00:13:26.444 10:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:26.444 10:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.380 10:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.380 10:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:27.380 10:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:27.646 true 00:13:27.646 10:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:27.646 10:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 10:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.583 10:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:28.583 10:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:28.840 true 00:13:28.840 10:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:28.840 10:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.777 10:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.777 10:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:29.777 10:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:30.034 true 00:13:30.034 10:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:30.034 10:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.292 10:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.292 10:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:30.292 10:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:30.550 true 00:13:30.550 10:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:30.550 10:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 10:41:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.924 10:41:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:31.925 10:41:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:32.187 true 00:13:32.187 10:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:32.187 10:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.815 10:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.073 10:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:33.073 10:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:33.330 true 00:13:33.330 10:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:33.330 10:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.266 10:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.266 10:41:03 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:34.266 10:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:34.524 true 00:13:34.525 10:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:34.525 10:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.462 10:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.462 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.462 10:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:35.462 10:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:35.721 true 00:13:35.721 10:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:35.721 10:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.659 10:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.659 10:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:36.659 10:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:36.918 true 00:13:36.918 10:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:36.918 10:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.853 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:13:37.853 10:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.853 10:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:37.853 10:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:38.111 true 00:13:38.111 10:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:38.111 10:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.046 10:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.046 10:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:39.046 10:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:39.304 true 00:13:39.304 10:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:39.304 10:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 10:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.241 10:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1015 00:13:40.241 10:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:40.500 true 00:13:40.500 10:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:40.500 10:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.437 10:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.437 10:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:41.437 10:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:41.696 true 00:13:41.696 10:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:41.696 10:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 10:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.632 10:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:42.632 10:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:42.890 true 00:13:42.890 10:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:42.890 10:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.828 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:13:43.828 10:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.828 10:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:43.828 10:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:44.086 true 00:13:44.086 10:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:44.086 10:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.024 10:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.024 10:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:45.024 10:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:45.282 true 00:13:45.282 10:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:45.282 10:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.541 10:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.541 10:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:45.541 10:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:45.800 true 00:13:45.800 10:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:45.800 10:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.177 10:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.177 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:13:47.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.177 10:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:47.177 10:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:47.177 true 00:13:47.177 10:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:47.177 10:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.114 10:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.374 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.374 10:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:48.374 10:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:48.374 true 00:13:48.374 10:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:48.374 10:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.376 10:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.376 10:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:49.376 10:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:49.634 true 00:13:49.634 10:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:49.634 10:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.570 10:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.570 10:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:50.570 10:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:50.828 true 00:13:50.828 10:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:50.828 10:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.763 10:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.763 10:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:51.763 10:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:52.021 true 00:13:52.021 10:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:52.021 10:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.957 10:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.957 10:41:21 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:52.957 10:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:53.220 true 00:13:53.220 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:53.220 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.478 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.478 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:53.479 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:53.737 true 00:13:53.737 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:53.737 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.996 10:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.255 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:54.255 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:54.255 true 00:13:54.255 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:54.255 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.514 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.773 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:54.773 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:54.773 true 00:13:54.773 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:54.773 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:54.773 Initializing NVMe Controllers
00:13:54.773 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:13:54.773 Controller IO queue size 128, less than required.
00:13:54.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:54.773 Controller IO queue size 128, less than required.
00:13:54.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:54.773 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:54.773 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:54.773 Initialization complete. Launching workers.
00:13:54.773 ========================================================
00:13:54.773                                                                   Latency(us)
00:13:54.773 Device Information                                               :     IOPS    MiB/s     Average        min          max
00:13:54.773 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  5213.80     2.55    22297.17     995.25   1137807.99
00:13:54.773 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35573.13    17.37     3598.07    1434.57    295428.63
00:13:54.773 ========================================================
00:13:54.773 Total                                                            : 40786.93    19.92     5988.37     995.25   1137807.99
00:13:54.773
00:13:55.032 10:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.291 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:55.291 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:55.291 true 00:13:55.291 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3999953 00:13:55.291 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3999953) - No such process 00:13:55.292 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3999953 00:13:55.292 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.550 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:55.809 null0 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.809 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:56.068 null1 00:13:56.068 10:41:24
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.068 10:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:56.327 null2 00:13:56.327 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.327 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.327 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:56.327 null3 00:13:56.586 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.586 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.586 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:56.586 null4 00:13:56.586 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.586 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.586 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:56.844 null5 00:13:56.844 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.844 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.844 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:56.844 null6 00:13:57.103 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.103 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.103 10:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:57.103 null7 00:13:57.103 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:57.103 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:57.103 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:57.103 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.103 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
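
For readability, the loop that produced the @44-@50 trace entries above can be sketched as follows. This is a reconstruction from the trace markers only, not the verbatim ns_hotplug_stress.sh source; $rpc_py stands for the full /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py path seen in the log, and $PID for the perf process (3999953):

    # Hotplug loop (sketch): churn namespace 1 and grow the NULL1 bdev
    # for as long as the background I/O workload is still running.
    while kill -0 "$PID"; do                                             # .sh@44
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # .sh@45: hot-remove ns 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # .sh@46: hot-add it back
        ((++null_size))                                                  # .sh@49: 1010, 1011, ...
        $rpc_py bdev_null_resize NULL1 $null_size                        # .sh@50: resize under I/O
    done
    wait "$PID"                                                          # .sh@53: reap the workload

The loop ends exactly as the log shows: once the perf process exits and prints its latency summary, kill -0 fails with "No such process" at line 44 and the script falls through to the wait at line 53, then removes namespaces 1 and 2 at lines 54-55.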
00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
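
The interleaved @14-@18 markers above come from eight backgrounded invocations of the add_remove helper. A sketch reconstructed from those markers (only the argument handling, the loop bound of 10, and the two rpc.py calls are visible in the trace; the exact source may differ):

    # add_remove (sketch): repeatedly attach a null bdev as a namespace
    # of cnode1 and detach it again, ten times per worker.
    add_remove() {
        local nsid=$1 bdev=$2                                                     # .sh@14
        for (( i = 0; i < 10; i++ )); do                                          # .sh@16
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev  # .sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid        # .sh@18
        done
    }

Because all eight workers write their xtrace output to the same console, the @16/@17/@18 entries interleave, which is why they appear out of order here rather than as eight tidy loops.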
00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
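
Putting the pieces together, the setup and fan-out traced by the @58-@66 markers looks roughly like this. Again a sketch inferred from the trace; the nsid-to-bdev pairing matches the "add_remove 1 null0" through "add_remove 8 null7" calls in the log:

    nthreads=8                                      # .sh@58
    pids=()
    for (( i = 0; i < nthreads; i++ )); do          # .sh@59
        $rpc_py bdev_null_create null$i 100 4096    # .sh@60: name, size in MB, block size in bytes
    done
    for (( i = 0; i < nthreads; i++ )); do          # .sh@62
        add_remove $((i + 1)) null$i &              # .sh@63: one namespace-churn worker per bdev
        pids+=($!)                                  # .sh@64: collect worker PIDs
    done
    wait ${pids[@]}                                 # .sh@66: traced below with the expanded PIDs

The eight PIDs shown in the expanded wait call below (4005688 through 4005701) are the backgrounded workers collected into the pids array.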
00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4005688 4005690 4005692 4005694 4005695 4005697 4005699 4005701 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.104 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.362 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.362 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.362 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.362 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.362 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.362 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.362 10:41:26 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.362 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.620 10:41:26 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.620 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.878 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.136 10:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.136 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.136 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.136 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.136 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.136 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.136 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.136 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.394 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.652 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.910 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.910 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.910 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.910 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.911 10:41:27 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.911 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.170 10:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.170 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 
10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.429 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.688 10:41:28 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.688 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.689 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.947 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.948 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.948 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.948 10:41:28 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.948 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.948 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.948 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.948 10:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.206 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.464 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.464 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.464 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.464 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.464 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.464 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.464 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.465 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.724 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:00.983 rmmod nvme_rdma 00:14:00.983 rmmod nvme_fabrics 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.983 10:41:29 
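The interleaved add/remove bursts above are the guts of the hot-plug stress test: per the @16/@17/@18 xtrace tags, the script loops ten times per namespace, hot-adding a namespace backed by a null bdev and then ripping it back out while the host side keeps I/O in flight. The eight-wide, out-of-order bursts suggest eight of these loops run concurrently. A plausible reconstruction, not the script verbatim (the stress_ns helper name and the backgrounding are inferred from the trace):

    rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # One worker per namespace; @16 is the loop header, @17 the add, @18 the remove.
    stress_ns() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    for n in $(seq 1 8); do
        stress_ns "$n" "null$((n - 1))" &   # nsid N is consistently paired with bdev null(N-1) in the trace
    done
    wait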
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3999473 ']' 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3999473 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 3999473 ']' 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 3999473 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3999473 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3999473' 00:14:00.983 killing process with pid 3999473 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 3999473 00:14:00.983 10:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 3999473 00:14:01.243 10:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.243 10:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:01.243 00:14:01.243 real 0m47.324s 00:14:01.243 user 3m17.324s 00:14:01.243 sys 0m12.007s 00:14:01.243 10:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:01.243 10:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.243 ************************************ 00:14:01.243 END TEST nvmf_ns_hotplug_stress 00:14:01.243 ************************************ 00:14:01.243 10:41:30 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:01.243 10:41:30 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:01.243 10:41:30 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:01.243 10:41:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:01.243 ************************************ 00:14:01.243 START TEST nvmf_connect_stress 00:14:01.243 ************************************ 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:14:01.243 * Looking for test storage... 
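The teardown above is the standard autotest sequence: drop the trap, sync, unload nvme-rdma and nvme-fabrics (the bare "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines are modprobe -v output), then kill the target whose pid (3999473) was saved at startup. The killprocess helper, reconstructed from the @949-@973 trace (a sketch; the real helper's sudo branch is more involved):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                # @949: refuse an empty pid
        kill -0 "$pid"                           # @953: fail fast if the process is already gone
        if [[ $(uname) == Linux ]]; then         # @954
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @955: here 'reactor_1'
            [[ $process_name == sudo ]] && return 1           # @959: never signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"     # @967
        kill "$pid"                              # @968
        wait "$pid"                              # @973: reap it and collect the exit status
    }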
00:14:01.243 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.243 10:41:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.852 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:07.853 10:41:36 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:07.853 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:07.853 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@377 -- # modinfo irdma 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.853 10:41:36 
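All of the NIC probing above boils down to per-vendor PCI ID tables plus a filter on SPDK_TEST_NVMF_NICS. With e810 configured, the two ice ports at 0000:af:00.0/.1 (device 0x159b) are matched, and irdma is loaded with roce_ena=1 so the E810 RDMA function speaks RoCE rather than its default iWARP. The tables and the narrowing step, condensed from the @291-@330 trace (a sketch, not common.sh verbatim):

    intel=0x8086 mellanox=0x15b3
    e810=(0x1592 0x159b)          # Intel E810 variants (ice driver)
    x722=(0x37d2)                 # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox ConnectX family
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")

    # @329/@330: SPDK_TEST_NVMF_NICS=e810, so everything else is dropped.
    [[ $SPDK_TEST_NVMF_NICS == e810 ]] && pci_devs=("${e810[@]}")

    modprobe irdma roce_ena=1     # @377: force RoCE mode on the E810 RDMA function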
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:07.853 Found net devices under 0000:af:00.0: cvl_0_0 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:07.853 Found net devices under 0000:af:00.1: cvl_0_1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:07.853 
10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:07.853 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:07.853 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:07.853 altname enp175s0f0np0 00:14:07.853 altname ens801f0np0 00:14:07.853 inet 192.168.100.8/24 scope global cvl_0_0 00:14:07.853 valid_lft forever preferred_lft forever 00:14:07.853 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:07.853 valid_lft forever preferred_lft forever 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:07.853 10:41:36 nvmf_rdma.nvmf_connect_stress 
-- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:07.853 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:07.853 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:07.853 altname enp175s0f1np1 00:14:07.853 altname ens801f1np1 00:14:07.853 inet 192.168.100.9/24 scope global cvl_0_1 00:14:07.853 valid_lft forever preferred_lft forever 00:14:07.854 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:07.854 valid_lft forever preferred_lft forever 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.854 10:41:36 
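Both ports come up with the expected addresses: 192.168.100.8/24 on cvl_0_0 and 192.168.100.9/24 on cvl_0_1 (NVMF_IP_PREFIX=192.168.100, least addr 8). The address readback is just the three commands visible at @113, wrapped as a helper:

    # get_ip_address, per the @112/@113 trace (sketch of the common.sh helper).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address cvl_0_0   # -> 192.168.100.8
    get_ip_address cvl_0_1   # -> 192.168.100.9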
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:07.854 192.168.100.9' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:07.854 192.168.100.9' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:07.854 192.168.100.9' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4010020 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4010020 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 4010020 ']' 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
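RDMA_IP_LIST carries one discovered address per line, and the two target IPs are peeled off with head/tail exactly as traced at @456-@458; with those in hand, nvmf_tgt is started (pid 4010020) and waitforlisten polls its RPC socket. A condensed sketch of that split:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
    [[ -n $NVMF_FIRST_TARGET_IP ]]                          # @459: bail out if discovery failed
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' # @463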
00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:07.854 10:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.854 [2024-06-10 10:41:36.542249] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:07.854 [2024-06-10 10:41:36.542291] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.854 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.854 [2024-06-10 10:41:36.600910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.854 [2024-06-10 10:41:36.675510] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.854 [2024-06-10 10:41:36.675549] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.854 [2024-06-10 10:41:36.675556] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.854 [2024-06-10 10:41:36.675562] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.854 [2024-06-10 10:41:36.675567] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.854 [2024-06-10 10:41:36.675669] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.854 [2024-06-10 10:41:36.675775] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.854 [2024-06-10 10:41:36.675776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 [2024-06-10 10:41:37.396433] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x117b0d0/0x117a710) succeed. 00:14:08.422 [2024-06-10 10:41:37.405066] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x117c400/0x117ac90) succeed. 00:14:08.422 [2024-06-10 10:41:37.405088] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 [2024-06-10 10:41:37.421281] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 NULL1 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4010102 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.422 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.681 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.682 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.940 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:14:08.940 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:08.940 10:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.940 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.940 10:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.199 10:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:09.199 10:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.199 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.199 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.767 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.767 10:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:09.767 10:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.767 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.767 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.025 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.025 10:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:10.025 10:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.025 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.025 10:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.284 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.284 10:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:10.284 10:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.284 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.284 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.543 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.543 10:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:10.543 10:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.543 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.543 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.801 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.801 10:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:10.801 10:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.801 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.801 10:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.368 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.368 10:41:40 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:11.368 10:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.368 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.368 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.626 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.626 10:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:11.626 10:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.626 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.626 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.885 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.885 10:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:11.885 10:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.885 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.885 10:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.144 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.144 10:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:12.144 10:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.144 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.144 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.402 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.402 10:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:12.402 10:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.402 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.402 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.969 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.969 10:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:12.969 10:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.969 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.969 10:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.232 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.232 10:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:13.232 10:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.232 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.232 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.492 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.493 10:41:42 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:13.493 10:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.493 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.493 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.751 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.751 10:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:13.751 10:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.751 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.751 10:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.318 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.318 10:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:14.318 10:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.318 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.318 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.576 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.576 10:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:14.576 10:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.576 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.576 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.855 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.855 10:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:14.855 10:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.855 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.855 10:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.114 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.114 10:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:15.114 10:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.114 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.114 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.374 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.374 10:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:15.374 10:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.374 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.374 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.941 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.941 10:41:44 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:15.941 10:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.941 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.941 10:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.200 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.200 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:16.200 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.200 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.200 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.459 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.459 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:16.459 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.459 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.459 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.717 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.717 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:16.717 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.717 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.717 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.975 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.975 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:16.975 10:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.975 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.975 10:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.543 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.543 10:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:17.543 10:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.543 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.543 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.802 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.802 10:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:17.802 10:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.802 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.802 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.061 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.061 10:41:46 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:18.061 10:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.061 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.061 10:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.320 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.320 10:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:18.320 10:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.320 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.320 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.579 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4010102 00:14:18.579 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4010102) - No such process 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4010102 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.579 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:18.579 rmmod nvme_rdma 00:14:18.838 rmmod nvme_fabrics 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4010020 ']' 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4010020 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 4010020 ']' 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 4010020 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 
4010020 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4010020' 00:14:18.838 killing process with pid 4010020 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 4010020 00:14:18.838 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 4010020 00:14:19.097 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.097 10:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:19.097 00:14:19.097 real 0m17.749s 00:14:19.097 user 0m41.503s 00:14:19.097 sys 0m8.560s 00:14:19.097 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:19.097 10:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.097 ************************************ 00:14:19.097 END TEST nvmf_connect_stress 00:14:19.097 ************************************ 00:14:19.097 10:41:47 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:14:19.097 10:41:47 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:19.097 10:41:47 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:19.097 10:41:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:19.097 ************************************ 00:14:19.097 START TEST nvmf_fused_ordering 00:14:19.097 ************************************ 00:14:19.097 10:41:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:14:19.097 * Looking for test storage... 
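The long run of kill -0 4010102 / rpc_cmd pairs above is connect_stress.sh polling its worker: while the stress binary (PERF_PID) is alive it keeps replaying the generated rpc.txt against the target, and once the PID disappears ("No such process" at line 34) it reaps the worker and tears everything down. A rough reconstruction from the trace alone; the rpc.txt contents are not visible here, so treat the loop body as a sketch:

    rpcs=$testdir/rpc.txt    # built earlier from 20 templated snippets (the seq 1 20 / cat loop)
    while kill -0 "$PERF_PID" 2>/dev/null; do   # connect_stress.sh@34
        rpc_cmd < "$rpcs"                       # connect_stress.sh@35
    done
    wait "$PERF_PID"                            # connect_stress.sh@38
    rm -f "$rpcs"                               # connect_stress.sh@39

    # nvmftestfini / nvmfcleanup: retry the module unload, then stop the target
    for i in {1..20}; do                        # nvmf/common.sh@121
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    done
    killprocess "$nvmfpid"                      # kills pid 4010020 (reactor_1)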
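fused_ordering.sh then re-enters nvmftestinit, so the next stretch of trace repeats NIC discovery: match PCI vendor:device IDs against the e810 table, load the RDMA provider, and map each PCI function to its net device. In outline (PCI addresses and driver name taken from the "Found 0000:af:00.x" lines below; the roce_ena knob is specific to irdma):

    # 0x8086:0x159b is an Intel E810 port (SPDK_TEST_NVMF_NICS=e810); its RDMA
    # driver is irdma, with RoCE enabled explicitly (nvmf/common.sh@377)
    modinfo irdma >/dev/null       # fail fast if the module is unavailable
    modprobe irdma roce_ena=1

    # map each matched PCI function to its netdev (the common.sh@383 pattern)
    ls /sys/bus/pci/devices/0000:af:00.0/net/   # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:af:00.1/net/   # -> cvl_0_1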
00:14:19.097 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.097 10:41:48 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.098 10:41:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:25.704 10:41:53 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:25.704 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:25.704 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@377 -- # modinfo irdma 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.704 10:41:53 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:25.704 Found net devices under 0000:af:00.0: cvl_0_0 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:25.704 Found net devices under 0000:af:00.1: cvl_0_1 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:25.704 
10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:25.704 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:25.705 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:25.705 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:25.705 altname enp175s0f0np0 00:14:25.705 altname ens801f0np0 00:14:25.705 inet 192.168.100.8/24 scope global cvl_0_0 00:14:25.705 valid_lft forever preferred_lft forever 00:14:25.705 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:25.705 valid_lft forever preferred_lft forever 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering 
-- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:25.705 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:25.705 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:25.705 altname enp175s0f1np1 00:14:25.705 altname ens801f1np1 00:14:25.705 inet 192.168.100.9/24 scope global cvl_0_1 00:14:25.705 valid_lft forever preferred_lft forever 00:14:25.705 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:25.705 valid_lft forever preferred_lft forever 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:25.705 10:41:53 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:25.705 192.168.100.9' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:25.705 192.168.100.9' 00:14:25.705 10:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:25.705 192.168.100.9' 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4015249 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4015249 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 4015249 ']' 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
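With the target app up (nvmfpid 4015249, core mask 0x2), the rpc_cmd calls that follow are effectively scripts/rpc.py invocations against /var/tmp/spdk.sock, the same bring-up sequence connect_stress used at the top of this section. Spelled out as a standalone sketch (paths as in this workspace):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

That null bdev is what the fused_ordering app later reports as "Namespace ID: 1 size: 1GB" once it attaches.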
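Nothing in this section drives the kernel initiator (both test binaries connect through SPDK's own NVMe-oF host code via the -r transport-ID string), but common.sh primed NVME_CONNECT='nvme connect -i 15' for irdma earlier in the trace, so a roughly equivalent manual attach from a Linux host would be (hostnqn/hostid values taken from the common.sh@17-18 lines above; illustrative only):

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
        --hostid=801347e8-3fd0-e911-906e-0017a4403562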
00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:25.705 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.705 [2024-06-10 10:41:54.091751] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:25.705 [2024-06-10 10:41:54.091795] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.705 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.705 [2024-06-10 10:41:54.154233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.705 [2024-06-10 10:41:54.227592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.705 [2024-06-10 10:41:54.227630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.705 [2024-06-10 10:41:54.227639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.705 [2024-06-10 10:41:54.227645] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.705 [2024-06-10 10:41:54.227650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.705 [2024-06-10 10:41:54.227669] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.964 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:25.964 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:14:25.964 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.964 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 [2024-06-10 10:41:54.938804] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xf49af0/0xf49130) succeed. 00:14:25.965 [2024-06-10 10:41:54.947137] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xf4ada0/0xf496b0) succeed. 00:14:25.965 [2024-06-10 10:41:54.947158] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 [2024-06-10 10:41:54.968589] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 NULL1 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.965 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:26.224 10:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.224 10:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:26.224 [2024-06-10 10:41:55.023067] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
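The rpc_cmd sequence above wires the target end to end; as a standalone sketch, the same setup via direct rpc.py calls (NQN, serial number, and sizes are this run's values; default RPC socket assumed):

# Subsystem allowing any host (-a), serial SPDK00000000000001, at most 10 namespaces.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# RDMA listener on the first target IP discovered earlier.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# 1000 MB null backing bdev with 512-byte blocks, then wait for bdev examination.
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_wait_for_examine
# Expose the bdev as namespace 1 of the subsystem ("Namespace ID: 1 size: 1GB" below).
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary launched next connects with the transport ID string shown above and drives fused command pairs at that namespace, printing one fused_ordering(N) line per iteration.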
00:14:26.224 [2024-06-10 10:41:55.023109] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015492 ] 00:14:26.224 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.224 Attached to nqn.2016-06.io.spdk:cnode1 00:14:26.224 Namespace ID: 1 size: 1GB 00:14:26.224 fused_ordering(0) 00:14:26.224 fused_ordering(1) 00:14:26.224 fused_ordering(2)
[... fused_ordering(3) through fused_ordering(1022) elided: the tool prints one such line per iteration, counting monotonically up to 1023; elapsed timestamps advance from 00:14:26.224 to 00:14:26.747 over the run ...]
00:14:26.747 fused_ordering(1023) 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:26.747 rmmod nvme_rdma 00:14:26.747 rmmod nvme_fabrics 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4015249 ']'
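nvmftestfini's cleanup, traced above, tolerates unload failures and retries: errexit is suspended (set +e), the kernel initiator modules are removed with up to 20 attempts, then errexit is restored. A condensed sketch of that pattern (the break/sleep pacing is illustrative, not lifted from common.sh):

# Retry unloading the kernel initiator modules; modprobe -v echoes the
# underlying rmmod calls ("rmmod nvme_rdma", "rmmod nvme_fabrics" above).
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1    # illustrative pause between attempts
done
set -e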
10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4015249 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 4015249 ']' 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 4015249 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4015249 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4015249' 00:14:26.747 killing process with pid 4015249 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 4015249 00:14:26.747 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 4015249 00:14:27.007 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.007 10:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:27.007 00:14:27.007 real 0m7.982s 00:14:27.007 user 0m4.409s 00:14:27.007 sys 0m4.818s 00:14:27.007 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:27.007 10:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.007 ************************************ 00:14:27.007 END TEST nvmf_fused_ordering 00:14:27.007 ************************************ 00:14:27.007 10:41:55 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:14:27.007 10:41:55 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:27.007 10:41:55 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:27.007 10:41:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:27.007 ************************************ 00:14:27.007 START TEST nvmf_delete_subsystem 00:14:27.007 ************************************ 00:14:27.007 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:14:27.266 * Looking for test storage... 
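killprocess, traced just above, does not blindly signal the PID it was handed: it first confirms the process is alive (kill -0), resolves the command name (reactor_1 here), and checks it is not a bare sudo wrapper before terminating and reaping. A reduced sketch of those checks (the real helper special-cases sudo rather than simply refusing; refusing keeps the sketch short):

# Reduced sketch of the traced guard-then-kill pattern.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 in this run
    [ "$name" != sudo ] || return 1             # never signal sudo itself here
    kill "$pid" && wait "$pid"                  # SIGTERM, then reap the child
}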
00:14:27.266 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:27.266 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.266 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:27.266 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.266 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.266 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.267 10:41:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:33.838 10:42:02 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:33.838 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:33.838 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # modinfo irdma 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:33.838 Found net devices under 0000:af:00.0: cvl_0_0 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:33.838 Found net devices under 0000:af:00.1: cvl_0_1 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:33.838 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:33.839 10:42:02 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:33.839 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:33.839 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:33.839 altname enp175s0f0np0 00:14:33.839 altname ens801f0np0 00:14:33.839 inet 192.168.100.8/24 scope global cvl_0_0 00:14:33.839 valid_lft forever preferred_lft forever 00:14:33.839 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:33.839 valid_lft forever preferred_lft forever 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:33.839 10:42:02 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:33.839 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:33.839 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:33.839 altname enp175s0f1np1 00:14:33.839 altname ens801f1np1 00:14:33.839 inet 192.168.100.9/24 scope global cvl_0_1 00:14:33.839 valid_lft forever preferred_lft forever 00:14:33.839 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:33.839 valid_lft forever preferred_lft forever 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:14:33.839 10:42:02 
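
At this point both RDMA interfaces carry their test addresses: cvl_0_0 holds 192.168.100.8 and cvl_0_1 holds 192.168.100.9. The get_ip_address helper being traced is a three-stage pipeline over `ip -o -4`, and the trace goes on below to collapse the resulting list into a first and second target IP with head/tail. A self-contained equivalent of both steps (a sketch assuming the two interface names from this rig):

    # First IPv4 address of an interface, with the /prefix-length stripped off.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for ifc in cvl_0_0 cvl_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
    [ -z "$NVMF_FIRST_TARGET_IP" ] && echo "no RDMA interface has an IPv4 address" >&2
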
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:33.839 192.168.100.9' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:33.839 192.168.100.9' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:33.839 192.168.100.9' 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:14:33.839 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4019162 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4019162 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 4019162 ']' 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:33.840 10:42:02 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.840 [2024-06-10 10:42:02.414293] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:33.840 [2024-06-10 10:42:02.414341] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.840 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.840 [2024-06-10 10:42:02.473941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.840 [2024-06-10 10:42:02.553010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.840 [2024-06-10 10:42:02.553045] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.840 [2024-06-10 10:42:02.553051] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.840 [2024-06-10 10:42:02.553057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.840 [2024-06-10 10:42:02.553062] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.840 [2024-06-10 10:42:02.553104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.840 [2024-06-10 10:42:02.553106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.409 [2024-06-10 10:42:03.281432] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x234e2d0/0x234d910) succeed. 00:14:34.409 [2024-06-10 10:42:03.290155] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x234f580/0x234de90) succeed. 00:14:34.409 [2024-06-10 10:42:03.290177] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.409 [2024-06-10 10:42:03.306386] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.409 NULL1 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.409 Delay0 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4019405 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:34.409 10:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:34.409 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.409 [2024-06-10 10:42:03.400827] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:14:36.312 10:42:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.312 10:42:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.312 10:42:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.880 [2024-06-10 10:42:05.901760] nvme_rdma.c:2494:nvme_rdma_log_wc_status: *ERROR*: WC error, qid 2, qp state 1, request 0x35184374496496 type 1, status: (12): transport retry counter exceeded 00:14:36.880 NVMe io qpair process completion error 00:14:36.880 NVMe io qpair process completion error 00:14:36.880 Read completed with error (sct=0, sc=8) 00:14:36.880 starting I/O failed: -6 00:14:36.880 Write completed with error (sct=0, sc=8) 00:14:36.880 Read completed with error (sct=0, sc=8) 00:14:36.880 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Write completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 
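
Everything needed for the delete-under-load scenario is now in place: nvmf_tgt is up on cores 0-1, the RDMA transport exists, subsystem nqn.2016-06.io.spdk:cnode1 has a listener on 192.168.100.8:4420 and a Delay0 namespace (a null bdev wrapped by bdev_delay so that I/O stays queued), and spdk_nvme_perf is pushing 128-deep random I/O at it. Deleting the subsystem while that queue is full is what produces the completion errors that follow. Condensed into a runnable sketch, with the rpc.py invocations mirroring the traced commands (the sleep is a stand-in for the script's own pacing, not taken from the trace):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read/write latency, in microseconds
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive I/O, then delete the subsystem while requests are still in flight.
    $perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
          -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                     # let the queue fill; pacing is illustrative
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
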
00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 starting I/O failed: -6 00:14:36.881 Read completed with error (sct=0, sc=8) 00:14:36.881 NVMe io qpair process completion error 00:14:37.449 [2024-06-10 10:42:06.464974] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Write 
completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 starting I/O failed: -6 00:14:37.449 [2024-06-10 10:42:06.465539] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Write completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.449 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, 
sc=8) 00:14:37.450 [2024-06-10 10:42:06.465827] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.450 Write completed with error (sct=0, sc=8) 00:14:37.450 Read completed with error (sct=0, sc=8) 00:14:37.709 10:42:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:37.709 10:42:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:37.709 10:42:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4019405 00:14:37.709 10:42:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:38.277 10:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:38.277 10:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4019405 00:14:38.277 10:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:38.536 NVMe io qpair process 
completion error 00:14:38.536 NVMe io qpair process completion error 00:14:38.536 10:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:38.536 10:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4019405 00:14:38.536 10:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:39.103 [2024-06-10 10:42:07.999973] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 [2024-06-10 10:42:08.000311] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read 
completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Write completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 [2024-06-10 10:42:08.000548] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.103 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 [2024-06-10 10:42:08.000772] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, sc=8) 00:14:39.104 Read completed with error (sct=0, sc=8) 00:14:39.104 Write completed with error (sct=0, 
sc=8)
00:14:39.104 Read completed with error (sct=0, sc=8)
00:14:39.104 Read completed with error (sct=0, sc=8)
00:14:39.104 Initializing NVMe Controllers
00:14:39.104 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:14:39.104 Controller IO queue size 128, less than required.
00:14:39.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:39.104 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:39.104 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:39.104 Initialization complete. Launching workers.
00:14:39.104 ========================================================
00:14:39.104 Latency(us)
00:14:39.104 Device Information : IOPS MiB/s Average min max
00:14:39.104 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 141.92 0.07 1320187.50 418152.31 2512951.70
00:14:39.104 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 141.92 0.07 1359256.99 975265.35 2510298.21
00:14:39.104 ========================================================
00:14:39.104 Total : 283.84 0.14 1339722.24 418152.31 2512951.70
00:14:39.104
00:14:39.104 [2024-06-10 10:42:08.001571] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:14:39.104 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:14:39.104 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4019405
00:14:39.104 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:14:39.104 [2024-06-10 10:42:08.015359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:14:39.104 [2024-06-10 10:42:08.015376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
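
The perf summary above records what survived the delete: the two queue pairs (lcores 2 and 3) completed roughly 142 IOPS each before their qpairs failed, and the controller ends in failed state. The script then waits for perf to exit by polling with `kill -0`, which succeeds while the PID exists and fails once it is gone; the sleep-0.5/delay counter traced at delete_subsystem.sh lines 35-38 is that poll loop. A standalone sketch of the idiom (the 30-iteration bound matches the trace; the error handling here is simplified):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            echo "perf ($perf_pid) still running after ~15s of polling, giving up" >&2
            break
        fi
        sleep 0.5
    done
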
00:14:39.104 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4019405 00:14:39.671 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4019405) - No such process 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4019405 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 4019405 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 4019405 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:14:39.671 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.672 [2024-06-10 10:42:08.536649] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4020442 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- 
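
`wait 4019405` is expected to fail at this point, since the PID is already gone, so the harness runs it under autotest_common.sh's NOT wrapper, which turns "command exited non-zero" into a passing assertion (the trace shows es=1 being checked). A minimal sketch of that negative-assertion pattern; this is simplified, as the real helper also inspects whether the argument is a function or a binary and treats exit codes above 128 as signals:

    # Assert that a command fails: succeed iff its exit status is non-zero.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # return 0 (success) only when the command failed
    }

    NOT wait 4019405 && echo 'wait failed, as expected for an already-reaped pid'
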
target/delete_subsystem.sh@56 -- # delay=0 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:39.672 10:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:39.672 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.672 [2024-06-10 10:42:08.615595] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:40.240 10:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.240 10:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:40.240 10:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.807 10:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.807 10:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:40.807 10:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.065 10:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.065 10:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:41.065 10:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.633 10:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.633 10:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:41.633 10:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.201 10:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.201 10:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:42.201 10:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.770 10:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.770 10:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:42.770 10:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.433 10:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.433 10:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:43.433 10:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.692 10:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.692 10:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:43.692 10:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.260 10:42:13 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.260 10:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:44.260 10:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.827 10:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.827 10:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:44.827 10:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.086 10:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.086 10:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:45.086 10:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:45.653 10:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:45.653 10:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:45.653 10:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.221 10:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.221 10:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:46.221 10:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.788 10:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:46.788 10:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442 00:14:46.788 10:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:46.788 Initializing NVMe Controllers 00:14:46.788 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.788 Controller IO queue size 128, less than required. 00:14:46.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.788 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:46.788 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:46.788 Initialization complete. Launching workers. 
00:14:46.788 ========================================================
00:14:46.788 Latency(us)
00:14:46.788 Device Information : IOPS MiB/s Average min max
00:14:46.788 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001367.63 1000068.57 1004062.28
00:14:46.788 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002492.77 1000131.62 1006299.91
00:14:46.788 ========================================================
00:14:46.788 Total : 256.00 0.12 1001930.20 1000068.57 1006299.91
00:14:46.788
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4020442
00:14:47.356 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4020442) - No such process
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4020442
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:14:47.356 rmmod nvme_rdma
00:14:47.356 rmmod nvme_fabrics
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4019162 ']'
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4019162
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 4019162 ']'
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 4019162
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4019162
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4019162'
00:14:47.356 killing process with pid 4019162
00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- #
kill 4019162 00:14:47.356 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 4019162 00:14:47.615 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.615 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:47.615 00:14:47.615 real 0m20.379s 00:14:47.615 user 0m52.073s 00:14:47.615 sys 0m5.709s 00:14:47.615 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:47.615 10:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:47.615 ************************************ 00:14:47.616 END TEST nvmf_delete_subsystem 00:14:47.616 ************************************ 00:14:47.616 10:42:16 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:14:47.616 10:42:16 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:47.616 10:42:16 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:47.616 10:42:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:47.616 ************************************ 00:14:47.616 START TEST nvmf_ns_masking 00:14:47.616 ************************************ 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:14:47.616 * Looking for test storage... 00:14:47.616 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- 
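
The ns_masking test that starts above derives its host identity differently from the previous test: `nvme gen-hostnqn` emits an NQN with an embedded UUID, and the trace peels the UUID back out as NVME_HOSTID. One way to express that derivation with plain parameter expansion (the UUID in the comment is the one printed in the trace; common.sh may extract it by other means):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # e.g. nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}   # strip everything up to and including ':uuid:'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "HOSTNQN=$NVME_HOSTNQN"
    echo "HOSTID=$NVME_HOSTID"
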
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.616 10:42:16 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=0a7e80ec-b989-44fc-b6e2-1f35a3cfd559 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.616 10:42:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@298 -- # local -ga mlx 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:54.183 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:54.183 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
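The "Found 0000:af:00.0 (0x8086 - 0x159b)" lines above come from common.sh matching entries in its pci_bus_cache against the e810/x722/mlx device-ID lists built just before, then resolving each hit to a kernel net device. Outside the harness, roughly the same enumeration can be approximated with lspci and sysfs; a sketch under that assumption (device IDs copied from the e810 list above, output format imitated, not the harness's actual code path):

    #!/usr/bin/env bash
    # Approximate the E810 port discovery above using lspci + sysfs.
    for dev_id in 1592 159b; do                     # e810 device IDs from common.sh
        for pci in $(lspci -Dn -d "8086:${dev_id}" | awk '{print $1}'); do
            echo "Found ${pci} (0x8086 - 0x${dev_id})"
            ls "/sys/bus/pci/devices/${pci}/net/" 2>/dev/null   # e.g. cvl_0_0
        done
    done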
00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.183 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@377 -- # modinfo irdma 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:54.184 Found net devices under 0000:af:00.0: cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:54.184 Found net devices under 0000:af:00.1: cvl_0_1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:54.184 10:42:22 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:54.184 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:54.184 link/ether 
b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:54.184 altname enp175s0f0np0 00:14:54.184 altname ens801f0np0 00:14:54.184 inet 192.168.100.8/24 scope global cvl_0_0 00:14:54.184 valid_lft forever preferred_lft forever 00:14:54.184 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:54.184 valid_lft forever preferred_lft forever 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:54.184 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:54.184 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:54.184 altname enp175s0f1np1 00:14:54.184 altname ens801f1np1 00:14:54.184 inet 192.168.100.9/24 scope global cvl_0_1 00:14:54.184 valid_lft forever preferred_lft forever 00:14:54.184 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:54.184 valid_lft forever preferred_lft forever 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:54.184 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:54.185 192.168.100.9' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:54.185 192.168.100.9' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:54.185 192.168.100.9' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4025387 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # 
waitforlisten 4025387 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 4025387 ']' 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:54.185 10:42:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:54.185 [2024-06-10 10:42:22.452302] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:54.185 [2024-06-10 10:42:22.452345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.185 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.185 [2024-06-10 10:42:22.514491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.185 [2024-06-10 10:42:22.591624] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.185 [2024-06-10 10:42:22.591664] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.185 [2024-06-10 10:42:22.591671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.185 [2024-06-10 10:42:22.591677] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.185 [2024-06-10 10:42:22.591682] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
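Before the per-namespace checks start, the bring-up that the next stretch of trace performs is easier to follow in one place. The commands below are taken verbatim from this run ($rpc abbreviates the full rpc.py path used throughout; the harness's waitforlisten step, which blocks until /var/tmp/spdk.sock accepts RPCs, is reduced to a comment):

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # (waitforlisten: block until /var/tmp/spdk.sock accepts RPCs)
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 0a7e80ec-b989-44fc-b6e2-1f35a3cfd559 -a 192.168.100.8 -s 4420 -i 4

The masking exercise itself then reduces to re-adding namespace 1 without auto-visibility and toggling per-host access, checking from the initiator after each step: a hidden namespace drops out of nvme list-ns and reports an all-zero NGUID from nvme id-ns. Note also the counter-case near the end of the test, where nvmf_ns_remove_host on namespace 2, which was created auto-visible, fails with "Invalid parameters".

    # Masking round-trip, commands again verbatim from the trace below:
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # expose to host1
    nvme list-ns /dev/nvme0 | grep 0x1                    # visible: prints "[ 0]:0x1"
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # hide again
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # masked: all-zero NGUID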
00:14:54.185 [2024-06-10 10:42:22.591738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.185 [2024-06-10 10:42:22.591835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.185 [2024-06-10 10:42:22.591921] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.185 [2024-06-10 10:42:22.591922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.443 10:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:54.443 10:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:14:54.443 10:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.443 10:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:54.443 10:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:54.443 10:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.443 10:42:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:54.443 [2024-06-10 10:42:23.473649] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1bf78f0/0x1bf6f30) succeed. 00:14:54.702 [2024-06-10 10:42:23.482503] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1bf8ca0/0x1bf74b0) succeed. 00:14:54.702 [2024-06-10 10:42:23.482525] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:14:54.702 10:42:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:54.702 10:42:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:54.702 10:42:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:54.702 Malloc1 00:14:54.702 10:42:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.960 Malloc2 00:14:54.960 10:42:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:55.219 10:42:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:55.478 10:42:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:55.478 [2024-06-10 10:42:24.410830] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:55.478 10:42:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:55.478 10:42:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0a7e80ec-b989-44fc-b6e2-1f35a3cfd559 -a 192.168.100.8 -s 4420 -i 4 00:14:55.737 10:42:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # 
waitforserial SPDKISFASTANDAWESOME 00:14:55.737 10:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:14:55.737 10:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.737 10:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:14:55.737 10:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:57.642 [ 0]:0x1 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=70032bde449747f59973b14df5d8288a 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 70032bde449747f59973b14df5d8288a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.642 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:57.901 [ 0]:0x1 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=70032bde449747f59973b14df5d8288a 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 70032bde449747f59973b14df5d8288a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:57.901 
10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:57.901 [ 1]:0x2 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:57.901 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:58.160 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b6b672b567044a98293fac69c5a8890 00:14:58.160 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b6b672b567044a98293fac69c5a8890 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.160 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:58.160 10:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.419 10:42:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.678 10:42:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0a7e80ec-b989-44fc-b6e2-1f35a3cfd559 -a 192.168.100.8 -s 4420 -i 4 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:14:58.937 10:42:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:00.843 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:01.102 10:42:29 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:01.102 [ 0]:0x2 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.102 10:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.102 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b6b672b567044a98293fac69c5a8890 00:15:01.102 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b6b672b567044a98293fac69c5a8890 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.102 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.361 [ 0]:0x1 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.361 10:42:30 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=70032bde449747f59973b14df5d8288a 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 70032bde449747f59973b14df5d8288a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:01.361 [ 1]:0x2 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b6b672b567044a98293fac69c5a8890 00:15:01.361 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b6b672b567044a98293fac69c5a8890 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.362 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- 
target/ns_masking.sh@39 -- # grep 0x2 00:15:01.621 [ 0]:0x2 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b6b672b567044a98293fac69c5a8890 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b6b672b567044a98293fac69c5a8890 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:01.621 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.189 10:42:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0a7e80ec-b989-44fc-b6e2-1f35a3cfd559 -a 192.168.100.8 -s 4420 -i 4 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:15:02.189 10:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.777 [ 0]:0x1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 
-- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=70032bde449747f59973b14df5d8288a 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 70032bde449747f59973b14df5d8288a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.777 [ 1]:0x2 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b6b672b567044a98293fac69c5a8890 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b6b672b567044a98293fac69c5a8890 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:04.777 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 
0x2 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:04.778 [ 0]:0x2 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b6b672b567044a98293fac69c5a8890 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b6b672b567044a98293fac69c5a8890 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:15:04.778 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:04.778 [2024-06-10 10:42:33.801635] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:04.778 request: 00:15:04.778 { 00:15:04.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.778 "nsid": 2, 00:15:04.778 "host": "nqn.2016-06.io.spdk:host1", 00:15:04.778 "method": "nvmf_ns_remove_host", 00:15:04.778 "req_id": 1 00:15:04.778 } 00:15:04.778 Got JSON-RPC error response 00:15:04.778 response: 00:15:04.778 { 00:15:04.778 "code": -32602, 00:15:04.778 "message": "Invalid parameters" 00:15:04.778 } 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:05.037 10:42:33 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:05.037 [ 0]:0x2 00:15:05.037 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:05.038 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:05.038 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b6b672b567044a98293fac69c5a8890 00:15:05.038 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b6b672b567044a98293fac69c5a8890 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.038 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:05.038 10:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.297 10:42:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # 
nvmftestfini 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:05.556 rmmod nvme_rdma 00:15:05.556 rmmod nvme_fabrics 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4025387 ']' 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4025387 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 4025387 ']' 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 4025387 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4025387 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4025387' 00:15:05.556 killing process with pid 4025387 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 4025387 00:15:05.556 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 4025387 00:15:05.815 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.815 10:42:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:05.815 00:15:05.815 real 0m18.348s 00:15:05.815 user 0m53.509s 00:15:05.815 sys 0m5.608s 00:15:05.815 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:05.815 10:42:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:05.815 ************************************ 00:15:05.815 END TEST nvmf_ns_masking 00:15:05.815 ************************************ 00:15:06.074 10:42:34 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:06.074 10:42:34 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:06.074 10:42:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:06.074 10:42:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:06.074 10:42:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:06.074 ************************************ 00:15:06.074 START TEST nvmf_nvme_cli 00:15:06.074 ************************************ 00:15:06.074 10:42:34 
nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:06.074 * Looking for test storage... 00:15:06.074 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.074 10:42:34 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.075 10:42:34 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:12.646 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:12.646 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@377 -- # modinfo irdma 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:12.646 Found net devices under 
0000:af:00.0: cvl_0_0 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:12.646 Found net devices under 0000:af:00.1: cvl_0_1 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.646 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:12.647 10:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@104 -- # echo cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:15:12.647 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:12.647 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:12.647 altname enp175s0f0np0 00:15:12.647 altname ens801f0np0 00:15:12.647 inet 192.168.100.8/24 scope global cvl_0_0 00:15:12.647 valid_lft forever preferred_lft forever 00:15:12.647 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:12.647 valid_lft forever preferred_lft forever 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:15:12.647 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:12.647 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:12.647 altname enp175s0f1np1 00:15:12.647 altname ens801f1np1 00:15:12.647 inet 192.168.100.9/24 scope global cvl_0_1 00:15:12.647 valid_lft forever preferred_lft forever 00:15:12.647 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:12.647 valid_lft forever preferred_lft forever 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:12.647 192.168.100.9' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:12.647 192.168.100.9' 
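For reference, the get_ip_address lookup traced above reduces to a one-liner; the cvl_0_0/cvl_0_1 interface names are specific to this E810 node, so substitute your own RDMA netdev when reproducing it:

    interface=cvl_0_0
    # field 4 of `ip -o -4 addr show` is ADDR/PREFIX; cut drops the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1    # prints 192.168.100.8 on this node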
00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:12.647 192.168.100.9' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:12.647 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4030919 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4030919 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 4030919 ']' 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:12.648 10:42:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.648 [2024-06-10 10:42:41.225837] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:12.648 [2024-06-10 10:42:41.225877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.648 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.648 [2024-06-10 10:42:41.290888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.648 [2024-06-10 10:42:41.372704] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.648 [2024-06-10 10:42:41.372744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
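The nvmfappstart/waitforlisten pair above follows the start-and-poll pattern sketched below. This is a minimal sketch rather than the harness itself: SPDK_DIR stands in for the repo checkout, /var/tmp/spdk.sock is the default RPC socket used in this run, and spdk_get_version serves only as a cheap liveness probe (any inexpensive RPC would do):

    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the JSON-RPC socket until the target is up and accepting commands
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done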
00:15:12.648 [2024-06-10 10:42:41.372751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.648 [2024-06-10 10:42:41.372757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.648 [2024-06-10 10:42:41.372762] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.648 [2024-06-10 10:42:41.372822] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.648 [2024-06-10 10:42:41.372916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.648 [2024-06-10 10:42:41.372935] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.648 [2024-06-10 10:42:41.372940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 [2024-06-10 10:42:42.075506] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x17e18f0/0x17e0f30) succeed. 00:15:13.216 [2024-06-10 10:42:42.084368] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x17e2ca0/0x17e14b0) succeed. 00:15:13.216 [2024-06-10 10:42:42.084390] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 Malloc0 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 Malloc1 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 [2024-06-10 10:42:42.169648] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.216 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:15:13.475 00:15:13.475 Discovery Log Number of Records 2, Generation counter 2 00:15:13.475 =====Discovery Log Entry 0====== 00:15:13.475 trtype: rdma 00:15:13.475 adrfam: ipv4 00:15:13.475 subtype: current discovery subsystem 00:15:13.475 treq: not required 00:15:13.475 portid: 0 00:15:13.475 trsvcid: 4420 00:15:13.475 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:13.475 traddr: 192.168.100.8 00:15:13.475 eflags: explicit discovery connections, duplicate discovery information 00:15:13.475 rdma_prtype: not specified 00:15:13.475 rdma_qptype: connected 00:15:13.475 rdma_cms: rdma-cm 00:15:13.475 rdma_pkey: 0x0000 00:15:13.475 =====Discovery Log Entry 1====== 00:15:13.475 trtype: rdma 00:15:13.475 adrfam: ipv4 00:15:13.475 subtype: nvme subsystem 00:15:13.475 treq: not required 00:15:13.475 portid: 0 00:15:13.475 trsvcid: 4420 00:15:13.475 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:13.475 traddr: 192.168.100.8 00:15:13.475 eflags: none 00:15:13.475 rdma_prtype: not specified 00:15:13.475 rdma_qptype: connected 00:15:13.475 rdma_cms: rdma-cm 00:15:13.475 rdma_pkey: 0x0000 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.475 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n2 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n1 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:15:13.476 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:13.734 10:42:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:13.734 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:15:13.734 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.734 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:15:13.734 10:42:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:15:13.734 10:42:42 nvmf_rdma.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # sleep 2 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n2 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n1 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme1n2 00:15:15.639 /dev/nvme1n1 00:15:15.639 /dev/nvme0n2 00:15:15.639 /dev/nvme0n1 ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # 
read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n2 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n1 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:15:15.639 10:42:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.576 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:16.576 
rmmod nvme_rdma 00:15:16.836 rmmod nvme_fabrics 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4030919 ']' 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4030919 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 4030919 ']' 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 4030919 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4030919 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4030919' 00:15:16.837 killing process with pid 4030919 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 4030919 00:15:16.837 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 4030919 00:15:17.206 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.206 10:42:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:17.206 00:15:17.206 real 0m11.047s 00:15:17.206 user 0m19.976s 00:15:17.206 sys 0m5.176s 00:15:17.206 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:17.206 10:42:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.206 ************************************ 00:15:17.206 END TEST nvmf_nvme_cli 00:15:17.206 ************************************ 00:15:17.206 10:42:45 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:15:17.206 10:42:45 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:15:17.206 10:42:45 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:17.206 10:42:45 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:17.206 10:42:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:17.207 ************************************ 00:15:17.207 START TEST nvmf_host_management 00:15:17.207 ************************************ 00:15:17.207 10:42:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:15:17.207 * Looking for test storage... 
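Condensed from the xtrace, the nvmf_nvme_cli test that just finished amounts to the end-to-end sequence below (rpc.py is scripts/rpc.py in the SPDK checkout; the address, NQN, and serial are the ones used in this run, and the --hostnqn/--hostid flags are omitted for brevity):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    nvme discover -t rdma -a 192.168.100.8 -s 4420          # two discovery log entries expected
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME  # 2 == the namespaces added above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1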
00:15:17.207 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.207 10:42:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:22.565 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:22.566 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:22.566 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@377 -- # modinfo irdma 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:22.566 Found net devices under 0000:af:00.0: cvl_0_0 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:22.566 Found net devices under 0000:af:00.1: cvl_0_1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:22.566 10:42:51 
nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:15:22.566 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:22.566 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:22.566 altname enp175s0f0np0 00:15:22.566 altname ens801f0np0 00:15:22.566 inet 192.168.100.8/24 scope global cvl_0_0 00:15:22.566 valid_lft forever preferred_lft forever 00:15:22.566 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:22.566 valid_lft forever preferred_lft forever 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 
-- # ip -o -4 addr show cvl_0_1 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:15:22.566 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:22.566 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:22.566 altname enp175s0f1np1 00:15:22.566 altname ens801f1np1 00:15:22.566 inet 192.168.100.9/24 scope global cvl_0_1 00:15:22.566 valid_lft forever preferred_lft forever 00:15:22.566 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:22.566 valid_lft forever preferred_lft forever 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:22.566 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 
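The allocate_nic_ips walk traced above reduces to a small helper: for each RDMA-capable interface returned by get_rdma_if_list, the IPv4 address is parsed out of "ip -o -4 addr show". A minimal standalone sketch of that parsing step, using the interface and address seen in this run:

    # field 4 of `ip -o -4 addr show` is ADDR/PREFIX; strip the prefix length
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0   # prints 192.168.100.8 on this node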
00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:22.567 192.168.100.9' 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:22.567 192.168.100.9' 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:22.567 192.168.100.9' 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:22.567 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4035179 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4035179 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 4035179 ']' 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.826 10:42:51 
nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:22.826 10:42:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:22.826 [2024-06-10 10:42:51.659259] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:22.826 [2024-06-10 10:42:51.659303] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.826 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.826 [2024-06-10 10:42:51.718895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.826 [2024-06-10 10:42:51.797158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.826 [2024-06-10 10:42:51.797194] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.826 [2024-06-10 10:42:51.797201] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.826 [2024-06-10 10:42:51.797207] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.826 [2024-06-10 10:42:51.797212] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
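At this point nvmfappstart has launched nvmf_tgt (pid 4035179) and blocks in waitforlisten until the app answers RPCs on /var/tmp/spdk.sock. A simplified sketch of that startup-and-poll pattern, assuming it is run from the spdk checkout; the retry cadence is an assumption, and rpc_get_methods is just a cheap RPC any running SPDK app answers:

    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1        # target died during startup
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done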
00:15:22.826 [2024-06-10 10:42:51.797254] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:15:22.826 [2024-06-10 10:42:51.797341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:15:22.826 [2024-06-10 10:42:51.797455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:15:22.826 [2024-06-10 10:42:51.797455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:23.762 [2024-06-10 10:42:52.517235] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1641be0/0x1641220) succeed.
00:15:23.762 [2024-06-10 10:42:52.526121] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1642f90/0x16417a0) succeed.
00:15:23.762 [2024-06-10 10:42:52.526143] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size.
New I/O unit size 24576
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:23.762 Malloc0
00:15:23.762 [2024-06-10 10:42:52.589140] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4035309
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4035309 /var/tmp/bdevperf.sock
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 4035309 ']'
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:15:23.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
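The rpcs.txt batch fed to rpc_cmd above is not echoed by xtrace; only its results (Malloc0, the 4420 listener) appear in the log. Based on MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SERIAL and the host add/remove calls later in the run, the batch is plausibly equivalent to the following individual rpc.py calls; the exact flags are a reconstruction, not the verbatim file:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0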
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:23.762 {
00:15:23.762   "params": {
00:15:23.762     "name": "Nvme$subsystem",
00:15:23.762     "trtype": "$TEST_TRANSPORT",
00:15:23.762     "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:23.762     "adrfam": "ipv4",
00:15:23.762     "trsvcid": "$NVMF_PORT",
00:15:23.762     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:23.762     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:23.762     "hdgst": ${hdgst:-false},
00:15:23.762     "ddgst": ${ddgst:-false}
00:15:23.762   },
00:15:23.762   "method": "bdev_nvme_attach_controller"
00:15:23.762 }
00:15:23.762 EOF
00:15:23.762 )")
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:15:23.762 10:42:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:23.762   "params": {
00:15:23.762     "name": "Nvme0",
00:15:23.762     "trtype": "rdma",
00:15:23.762     "traddr": "192.168.100.8",
00:15:23.762     "adrfam": "ipv4",
00:15:23.762     "trsvcid": "4420",
00:15:23.762     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:15:23.762     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:15:23.762     "hdgst": false,
00:15:23.762     "ddgst": false
00:15:23.762   },
00:15:23.762   "method": "bdev_nvme_attach_controller"
00:15:23.762 }'
00:15:23.762 [2024-06-10 10:42:52.678182] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:15:23.762 [2024-06-10 10:42:52.678229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035309 ]
00:15:23.762 EAL: No free 2048 kB hugepages reported on node 1
00:15:23.762 [2024-06-10 10:42:52.741800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:24.021 [2024-06-10 10:42:52.814371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:15:24.021 Running I/O for 10 seconds...
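gen_nvmf_target_json wraps the printf'd entry above into the full --json document that bdevperf reads from /dev/fd/63; jq merges it into the standard SPDK subsystem/config layout. A reconstruction of the resulting document's shape, with the single Nvme0 controller from this trace:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }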
00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1659 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1659 -ge 100 ']' 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.589 10:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:25.159 [2024-06-10 10:42:54.144978] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:15:25.159 [2024-06-10 10:42:54.145015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.159 [2024-06-10 10:42:54.145040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.159 [2024-06-10 10:42:54.145057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.159 [2024-06-10 10:42:54.145075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.159 [2024-06-10 10:42:54.145090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.159 [2024-06-10 10:42:54.145104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.159 [2024-06-10 10:42:54.145118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.159 [2024-06-10 10:42:54.145132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x30f2aaf 00:15:25.159 [2024-06-10 10:42:54.145139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x30f2aaf 00:15:25.160 [2024-06-10 10:42:54.145153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x30f2aaf 00:15:25.160 [2024-06-10 10:42:54.145167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x30f2aaf 00:15:25.160 [2024-06-10 10:42:54.145181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x30f2aaf 00:15:25.160 [2024-06-10 10:42:54.145195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x30f2aaf 00:15:25.160 [2024-06-10 10:42:54.145209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x30f2aaf 00:15:25.160 [2024-06-10 10:42:54.145225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x3ae85da2 00:15:25.160 [2024-06-10 10:42:54.145423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x8cdb6bf1 00:15:25.160 [2024-06-10 10:42:54.145621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.160 [2024-06-10 10:42:54.145629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x8cdb6bf1 00:15:25.161 [2024-06-10 10:42:54.145635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0 00:15:25.161 [2024-06-10 10:42:54.145643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x78456b28 00:15:25.161 [2024-06-10 10:42:54.145649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0
00:15:25.161 [2024-06-10 10:42:54.145657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x78456b28
00:15:25.161 [2024-06-10 10:42:54.145663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0
[... the identical print_command / ABORTED - SQ DELETION (00/08) print_completion pair repeats for every I/O still queued: WRITE lba:104064 through lba:105344 and READ lba:97280 through lba:98048, len:128 each, keys 0x78456b28 (writes) and 0x462c6552 (reads) ...]
00:15:25.161 [2024-06-10 10:42:54.145931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x462c6552
00:15:25.161 [2024-06-10 10:42:54.145938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:596b70 sqhd:1580 p:0 m:0 dnr:0
00:15:25.161 [2024-06-10 10:42:54.146219] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller.
00:15:25.161 [2024-06-10 10:42:54.147108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:15:25.161 task offset: 98304 on job bdev=Nvme0n1 fails
00:15:25.161
00:15:25.161                                                                                 Latency(us)
00:15:25.161 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:15:25.161 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:25.161 Job: Nvme0n1 ended in about 1.16 seconds with error
00:15:25.161 	 Verification LBA range: start 0x0 length 0x400
00:15:25.161 	 Nvme0n1             :       1.16    1533.37      95.84      55.01       0.00   39782.07    1763.23  591197.14
00:15:25.161 ===================================================================================================================
00:15:25.161 Total                                                                    :               1533.37      95.84      55.01       0.00   39782.07    1763.23  591197.14
00:15:25.161 [2024-06-10 10:42:54.148686] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:25.161 [2024-06-10 10:42:54.148698] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:15:25.161 [2024-06-10 10:42:54.161979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:25.161 [2024-06-10 10:42:54.182669] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
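The abort flood above is the point of the test, not a defect: host_management.sh kills the target out from under an active bdevperf run (the kill -9 traced on the next lines), so every I/O still queued on the submission queue completes as ABORTED - SQ DELETION (00/08) and the host path has to disconnect the qpair and reset the controller. Note how the script shrugs off losing the race with a target that is already gone; a minimal sketch of that idiom, with a hypothetical pid variable:

# SIGKILL the target; "No such process" is fine -- the test only needs it dead
kill -9 "$target_pid" || true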
00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4035309 00:15:25.730 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4035309) - No such process 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.730 { 00:15:25.730 "params": { 00:15:25.730 "name": "Nvme$subsystem", 00:15:25.730 "trtype": "$TEST_TRANSPORT", 00:15:25.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.730 "adrfam": "ipv4", 00:15:25.730 "trsvcid": "$NVMF_PORT", 00:15:25.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.730 "hdgst": ${hdgst:-false}, 00:15:25.730 "ddgst": ${ddgst:-false} 00:15:25.730 }, 00:15:25.730 "method": "bdev_nvme_attach_controller" 00:15:25.730 } 00:15:25.730 EOF 00:15:25.730 )") 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:25.730 10:42:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.730 "params": { 00:15:25.730 "name": "Nvme0", 00:15:25.730 "trtype": "rdma", 00:15:25.730 "traddr": "192.168.100.8", 00:15:25.730 "adrfam": "ipv4", 00:15:25.730 "trsvcid": "4420", 00:15:25.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:25.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:25.730 "hdgst": false, 00:15:25.730 "ddgst": false 00:15:25.730 }, 00:15:25.730 "method": "bdev_nvme_attach_controller" 00:15:25.730 }' 00:15:25.730 [2024-06-10 10:42:54.625171] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:25.730 [2024-06-10 10:42:54.625216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035693 ] 00:15:25.730 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.730 [2024-06-10 10:42:54.684776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.730 [2024-06-10 10:42:54.752975] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.025 Running I/O for 1 seconds... 
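The gen_nvmf_target_json trace above shows where the second bdevperf gets its bdev configuration: a per-subsystem heredoc is assembled, wrapped by jq, and handed to bdevperf as --json /dev/fd/62, i.e. through process substitution, so the config never touches disk. A standalone sketch of the same pattern, assuming SPDK's standard "subsystems" JSON config shape for the outer wrapper (addresses and options copied from this run; adjust paths for your tree):

gen_config() {
cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# <(...) shows up as /dev/fd/NN in the child, exactly as in the trace above
spdk/build/examples/bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 1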
00:15:26.979
00:15:26.979                                                                                 Latency(us)
00:15:26.979 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:15:26.979 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:26.979 	 Verification LBA range: start 0x0 length 0x400
00:15:26.979 	 Nvme0n1             :       1.02    3131.96     195.75       0.00       0.00   20017.84    1786.64   33704.23
00:15:26.979 ===================================================================================================================
00:15:26.979 Total                                                                    :               3131.96     195.75       0.00       0.00   20017.84    1786.64   33704.23
00:15:27.239 10:42:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
10:42:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
10:42:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
10:42:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt
10:42:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4035179 ']'
10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4035179
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 4035179 ']'
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 4035179
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # uname
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4035179
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4035179'
killing process with pid 4035179
10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 4035179
10:42:56 nvmf_rdma.nvmf_host_management
-- common/autotest_common.sh@973 -- # wait 4035179 00:15:27.498 [2024-06-10 10:42:56.459813] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:27.498 10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.498 10:42:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:27.498 10:42:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:27.498 00:15:27.498 real 0m10.484s 00:15:27.498 user 0m23.518s 00:15:27.498 sys 0m5.026s 00:15:27.498 10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:27.498 10:42:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:27.498 ************************************ 00:15:27.498 END TEST nvmf_host_management 00:15:27.498 ************************************ 00:15:27.498 10:42:56 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:15:27.498 10:42:56 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:27.498 10:42:56 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:27.498 10:42:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:27.758 ************************************ 00:15:27.758 START TEST nvmf_lvol 00:15:27.758 ************************************ 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:15:27.758 * Looking for test storage... 00:15:27.758 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.758 10:42:56 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.758 10:42:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.759 10:42:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.332 10:43:02 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:34.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:34.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.332 
10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@377 -- # modinfo irdma 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:34.332 Found net devices under 0000:af:00.0: cvl_0_0 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:34.332 Found net devices under 0000:af:00.1: cvl_0_1 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 
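What the device-discovery trace above boils down to: nvmf/common.sh keeps per-family PCI ID lists (0x1592/0x159b for E810, 0x37d2 for X722, the Mellanox 0x10xx/0xa2xx IDs for mlx), picks the family this pool is wired for, then resolves each matching PCI function to its kernel netdev through sysfs. A standalone sketch of that lookup, using lspci in place of the script's cached bus scan:

# list E810 functions (vendor 0x8086, device 0x159b) and the netdev bound to each
for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        # e.g. 0000:af:00.0 -> cvl_0_0
        [ -e "$netdir" ] && echo "$pci -> ${netdir##*/}"
    done
done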
00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:34.332 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:15:34.332 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:34.332 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:34.332 altname enp175s0f0np0 00:15:34.332 altname ens801f0np0 00:15:34.332 inet 192.168.100.8/24 scope global cvl_0_0 00:15:34.332 valid_lft forever preferred_lft forever 00:15:34.332 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:34.333 valid_lft forever preferred_lft forever 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ 
-f1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:15:34.333 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:34.333 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:34.333 altname enp175s0f1np1 00:15:34.333 altname ens801f1np1 00:15:34.333 inet 192.168.100.9/24 scope global cvl_0_1 00:15:34.333 valid_lft forever preferred_lft forever 00:15:34.333 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:34.333 valid_lft forever preferred_lft forever 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:34.333 192.168.100.9' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:34.333 192.168.100.9' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:34.333 192.168.100.9' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4039468 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4039468 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 4039468 ']' 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:34.333 10:43:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.333 [2024-06-10 10:43:02.777381] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
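The address plumbing just traced, as a self-contained sketch: take the first IPv4 address off each RDMA-capable interface, then peel the list into first and second target IPs exactly as nvmf/common.sh does (interface names and addresses copied from this host):

get_ip_address() {
    # $4 of `ip -o -4 addr show` is "192.168.100.8/24"; strip the prefix length
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9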
00:15:34.333 [2024-06-10 10:43:02.777431] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.333 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.333 [2024-06-10 10:43:02.837142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.333 [2024-06-10 10:43:02.914569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.333 [2024-06-10 10:43:02.914604] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.333 [2024-06-10 10:43:02.914610] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.333 [2024-06-10 10:43:02.914616] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.333 [2024-06-10 10:43:02.914622] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.333 [2024-06-10 10:43:02.914665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.333 [2024-06-10 10:43:02.914759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.333 [2024-06-10 10:43:02.914761] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.593 10:43:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:34.593 10:43:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:15:34.593 10:43:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.593 10:43:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:34.593 10:43:03 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:34.593 10:43:03 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.593 10:43:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:34.851 [2024-06-10 10:43:03.792167] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x767dd0/0x767410) succeed. 00:15:34.851 [2024-06-10 10:43:03.800944] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x769100/0x767990) succeed. 00:15:34.851 [2024-06-10 10:43:03.800970] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:15:34.852 10:43:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.110 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:35.110 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:35.369 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:35.369 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:35.369 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:35.628 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eaa68899-7c10-46cd-9a81-44d9647f62b3 00:15:35.628 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eaa68899-7c10-46cd-9a81-44d9647f62b3 lvol 20 00:15:35.887 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8eef65fc-6c58-4497-ae87-56fe5d75ccb7 00:15:35.887 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:36.145 10:43:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8eef65fc-6c58-4497-ae87-56fe5d75ccb7 00:15:36.145 10:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:36.404 [2024-06-10 10:43:05.263097] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:36.404 10:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:36.663 10:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4039958 00:15:36.663 10:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:36.663 10:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:36.663 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.600 10:43:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8eef65fc-6c58-4497-ae87-56fe5d75ccb7 MY_SNAPSHOT 00:15:37.859 10:43:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fdd06f1e-5d6f-4e0b-be74-2d247be69065 00:15:37.859 10:43:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8eef65fc-6c58-4497-ae87-56fe5d75ccb7 30 00:15:37.859 10:43:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fdd06f1e-5d6f-4e0b-be74-2d247be69065 
MY_CLONE
00:15:38.118 10:43:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=229b0a40-e331-4659-bc30-c9d645511598
00:15:38.118 10:43:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 229b0a40-e331-4659-bc30-c9d645511598
00:15:38.377 10:43:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4039958
00:15:48.357 Initializing NVMe Controllers
00:15:48.357 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:15:48.357 Controller IO queue size 128, less than required.
00:15:48.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:48.357 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:15:48.357 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:15:48.357 Initialization complete. Launching workers.
00:15:48.357 ========================================================
00:15:48.357                                                                                                                Latency(us)
00:15:48.357 Device Information                                                              :       IOPS      MiB/s    Average        min        max
00:15:48.357 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   16667.70      65.11    7681.40    2152.66   34192.39
00:15:48.357 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   16705.90      65.26    7663.70    3454.74   38027.93
00:15:48.357 ========================================================
00:15:48.357 Total                                                                           :   33373.60     130.37    7672.54    2152.66   38027.93
00:15:48.357
00:15:48.357 10:43:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:15:48.357 10:43:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8eef65fc-6c58-4497-ae87-56fe5d75ccb7
00:15:48.357 10:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eaa68899-7c10-46cd-9a81-44d9647f62b3
10:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
10:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
10:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4039468 ']'
10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4039468
10:43:17 nvmf_rdma.nvmf_lvol --
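Strung end to end, the nvmf_lvol flow that just finished is a short RPC script. A sketch of the whole chain, every call copied from the traces above ($rpc shortening the full rpc.py path is an assumption; capturing each printed name/UUID with $(...) is the same command-substitution pattern the test itself uses):

rpc=./spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB thin lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # freeze current contents
$rpc bdev_lvol_resize "$lvol" 30                                 # grow the live lvol under I/O
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                   # writable clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                                  # detach the clone from its snapshot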
common/autotest_common.sh@949 -- # '[' -z 4039468 ']' 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 4039468 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4039468 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4039468' 00:15:48.616 killing process with pid 4039468 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 4039468 00:15:48.616 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 4039468 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:48.875 00:15:48.875 real 0m21.182s 00:15:48.875 user 1m10.765s 00:15:48.875 sys 0m5.587s 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:48.875 ************************************ 00:15:48.875 END TEST nvmf_lvol 00:15:48.875 ************************************ 00:15:48.875 10:43:17 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:15:48.875 10:43:17 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:48.875 10:43:17 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:48.875 10:43:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:48.875 ************************************ 00:15:48.875 START TEST nvmf_lvs_grow 00:15:48.875 ************************************ 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:15:48.875 * Looking for test storage... 
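killprocess, traced twice now (pids 4035179 and 4039468), embeds a guard worth copying: before signalling it confirms the pid is alive and inspects its command name with ps, so a recycled pid is never killed by mistake (SPDK reactors show up as reactor_0/reactor_1). A simplified sketch of that shape, not the verbatim helper:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
    echo "killing process with pid $pid ($name)"
    kill "$pid" && wait "$pid"                    # wait only works for our own children
}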
00:15:48.875 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:48.875 10:43:17 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']'
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable
00:15:48.876 10:43:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=()
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=()
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=()
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=()
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=()
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=()
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=()
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:55.500 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:15:55.501 Found 0000:af:00.0 (0x8086 - 0x159b)
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:15:55.501 Found 0000:af:00.1 (0x8086 - 0x159b)
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ rdma == rdma ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@375 -- # (( 1 != 1 ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@377 -- # modinfo irdma
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:15:55.501 Found net devices under 0000:af:00.0: cvl_0_0
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:15:55.501 Found net devices under 0000:af:00.1: cvl_0_1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo cvl_0_0
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo cvl_0_1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_0
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show cvl_0_0
00:15:55.501 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000
00:15:55.501 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff
00:15:55.501 altname enp175s0f0np0
00:15:55.501 altname ens801f0np0
00:15:55.501 inet 192.168.100.8/24 scope global cvl_0_0
00:15:55.501 valid_lft forever preferred_lft forever
00:15:55.501 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll
00:15:55.501 valid_lft forever preferred_lft forever
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show cvl_0_1
00:15:55.501 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000
00:15:55.501 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff
00:15:55.501 altname enp175s0f1np1
00:15:55.501 altname ens801f1np1
00:15:55.501 inet 192.168.100.9/24 scope global cvl_0_1
00:15:55.501 valid_lft forever preferred_lft forever
00:15:55.501 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll
00:15:55.501 valid_lft forever preferred_lft forever
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list
00:15:55.501 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]]
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo cvl_0_0
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]]
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]]
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo cvl_0_1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_0
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8
00:15:55.502 192.168.100.9'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8
00:15:55.502 192.168.100.9'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8
00:15:55.502 192.168.100.9'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4045317
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4045317
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 4045317 ']'
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:55.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable
00:15:55.502 10:43:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:15:55.502 [2024-06-10 10:43:23.755934] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:15:55.502 [2024-06-10 10:43:23.755982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:55.502 EAL: No free 2048 kB hugepages reported on node 1
00:15:55.502 [2024-06-10 10:43:23.814620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:55.502 [2024-06-10 10:43:23.892163] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:55.502 [2024-06-10 10:43:23.892196] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:55.502 [2024-06-10 10:43:23.892203] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:55.502 [2024-06-10 10:43:23.892209] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:55.502 [2024-06-10 10:43:23.892213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:55.502 [2024-06-10 10:43:23.892234] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:15:55.761 [2024-06-10 10:43:24.760236] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x11e47e0/0x11e3e20) succeed.
00:15:55.761 [2024-06-10 10:43:24.769984] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x11e5a90/0x11e43a0) succeed.
00:15:55.761 [2024-06-10 10:43:24.770007] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable
00:15:55.761 10:43:24 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:15:56.020 ************************************
00:15:56.021 START TEST lvs_grow_clean
00:15:56.021 ************************************
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:15:56.021 10:43:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:15:56.021 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:15:56.021 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:15:56.280 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=aecfe403-472b-44f6-9f2f-186a1fafcb80
00:15:56.280 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:15:56.280 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:15:56.539 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:15:56.539 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:15:56.539 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aecfe403-472b-44f6-9f2f-186a1fafcb80 lvol 150
00:15:56.539 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=96646f51-a4d0-43d5-ac31-3a91bb20c923
00:15:56.539 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:15:56.539 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:15:56.798 [2024-06-10 10:43:25.674660] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:15:56.798 [2024-06-10 10:43:25.674709] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:15:56.798 true
00:15:56.798 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:15:56.798 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:15:57.058 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:15:57.058 10:43:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:15:57.058 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96646f51-a4d0-43d5-ac31-3a91bb20c923
00:15:57.317 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:15:57.317 [2024-06-10 10:43:26.324532] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:15:57.317 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4045817
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4045817 /var/tmp/bdevperf.sock
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 4045817 ']'
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:15:57.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable
00:15:57.576 10:43:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:15:57.576 [2024-06-10 10:43:26.515778] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:15:57.576 [2024-06-10 10:43:26.515823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045817 ]
00:15:57.576 EAL: No free 2048 kB hugepages reported on node 1
00:15:57.576 [2024-06-10 10:43:26.572868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:57.835 [2024-06-10 10:43:26.649920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:15:58.404 10:43:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:15:58.404 10:43:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0
00:15:58.404 10:43:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:15:58.663 Nvme0n1
00:15:58.663 10:43:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:15:58.922 [
00:15:58.922 {
00:15:58.922 "name": "Nvme0n1",
00:15:58.922 "aliases": [
00:15:58.922 "96646f51-a4d0-43d5-ac31-3a91bb20c923"
00:15:58.922 ],
00:15:58.922 "product_name": "NVMe disk",
00:15:58.922 "block_size": 4096,
00:15:58.922 "num_blocks": 38912,
00:15:58.922 "uuid": "96646f51-a4d0-43d5-ac31-3a91bb20c923",
00:15:58.922 "assigned_rate_limits": {
00:15:58.922 "rw_ios_per_sec": 0,
00:15:58.922 "rw_mbytes_per_sec": 0,
00:15:58.922 "r_mbytes_per_sec": 0,
00:15:58.922 "w_mbytes_per_sec": 0
00:15:58.922 },
00:15:58.922 "claimed": false,
00:15:58.922 "zoned": false,
00:15:58.922 "supported_io_types": {
00:15:58.922 "read": true,
00:15:58.922 "write": true,
00:15:58.922 "unmap": true,
00:15:58.922 "write_zeroes": true,
00:15:58.922 "flush": true,
00:15:58.922 "reset": true,
00:15:58.922 "compare": true,
00:15:58.922 "compare_and_write": true,
00:15:58.922 "abort": true,
00:15:58.922 "nvme_admin": true,
00:15:58.922 "nvme_io": true
00:15:58.922 },
00:15:58.922 "memory_domains": [
00:15:58.922 {
00:15:58.922 "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:15:58.922 "dma_device_type": 0
00:15:58.922 }
00:15:58.922 ],
00:15:58.922 "driver_specific": {
00:15:58.922 "nvme": [
00:15:58.922 {
00:15:58.923 "trid": {
00:15:58.923 "trtype": "RDMA",
00:15:58.923 "adrfam": "IPv4",
00:15:58.923 "traddr": "192.168.100.8",
00:15:58.923 "trsvcid": "4420",
00:15:58.923 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:15:58.923 },
00:15:58.923 "ctrlr_data": {
00:15:58.923 "cntlid": 1,
00:15:58.923 "vendor_id": "0x8086",
00:15:58.923 "model_number": "SPDK bdev Controller",
00:15:58.923 "serial_number": "SPDK0",
00:15:58.923 "firmware_revision": "24.09",
00:15:58.923 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:15:58.923 "oacs": {
00:15:58.923 "security": 0,
00:15:58.923 "format": 0,
00:15:58.923 "firmware": 0,
00:15:58.923 "ns_manage": 0
00:15:58.923 },
00:15:58.923 "multi_ctrlr": true,
00:15:58.923 "ana_reporting": false
00:15:58.923 },
00:15:58.923 "vs": {
00:15:58.923 "nvme_version": "1.3"
00:15:58.923 },
00:15:58.923 "ns_data": {
00:15:58.923 "id": 1,
00:15:58.923 "can_share": true
00:15:58.923 }
00:15:58.923 }
00:15:58.923 ],
00:15:58.923 "mp_policy": "active_passive"
00:15:58.923 }
00:15:58.923 }
00:15:58.923 ]
00:15:58.923 10:43:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4046045
00:15:58.923 10:43:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:15:58.923 10:43:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:58.923 Running I/O for 10 seconds...
00:15:59.861 Latency(us)
00:15:59.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:59.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:15:59.861 Nvme0n1 : 1.00 34817.00 136.00 0.00 0.00 0.00 0.00 0.00
00:15:59.861 ===================================================================================================================
00:15:59.861 Total : 34817.00 136.00 0.00 0.00 0.00 0.00 0.00
00:15:59.861
00:16:00.800 10:43:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:01.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:01.060 Nvme0n1 : 2.00 35250.50 137.70 0.00 0.00 0.00 0.00 0.00
00:16:01.060 ===================================================================================================================
00:16:01.060 Total : 35250.50 137.70 0.00 0.00 0.00 0.00 0.00
00:16:01.060
00:16:01.060 true
00:16:01.060 10:43:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:01.060 10:43:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:16:01.319 10:43:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:16:01.319 10:43:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:16:01.319 10:43:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4046045
00:16:01.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:01.888 Nvme0n1 : 3.00 35316.67 137.96 0.00 0.00 0.00 0.00 0.00
00:16:01.888 ===================================================================================================================
00:16:01.888 Total : 35316.67 137.96 0.00 0.00 0.00 0.00 0.00
00:16:01.888
00:16:02.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:02.826 Nvme0n1 : 4.00 35456.00 138.50 0.00 0.00 0.00 0.00 0.00
00:16:02.826 ===================================================================================================================
00:16:02.826 Total : 35456.00 138.50 0.00 0.00 0.00 0.00 0.00
00:16:02.826
00:16:04.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:04.203 Nvme0n1 : 5.00 35545.20 138.85 0.00 0.00 0.00 0.00 0.00
00:16:04.203 ===================================================================================================================
00:16:04.203 Total : 35545.20 138.85 0.00 0.00 0.00 0.00 0.00
00:16:04.203
00:16:05.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:05.141 Nvme0n1 : 6.00 35610.50 139.10 0.00 0.00 0.00 0.00 0.00
00:16:05.141 ===================================================================================================================
00:16:05.141 Total : 35610.50 139.10 0.00 0.00 0.00 0.00 0.00
00:16:05.141
00:16:06.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:06.078 Nvme0n1 : 7.00 35657.43 139.29 0.00 0.00 0.00 0.00 0.00
00:16:06.078 ===================================================================================================================
00:16:06.078 Total : 35657.43 139.29 0.00 0.00 0.00 0.00 0.00
00:16:06.078
00:16:07.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:07.016 Nvme0n1 : 8.00 35695.88 139.44 0.00 0.00 0.00 0.00 0.00
00:16:07.016 ===================================================================================================================
00:16:07.016 Total : 35695.88 139.44 0.00 0.00 0.00 0.00 0.00
00:16:07.016
00:16:07.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:07.955 Nvme0n1 : 9.00 35726.33 139.56 0.00 0.00 0.00 0.00 0.00
00:16:07.955 ===================================================================================================================
00:16:07.955 Total : 35726.33 139.56 0.00 0.00 0.00 0.00 0.00
00:16:07.955
00:16:08.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:08.893 Nvme0n1 : 10.00 35747.10 139.64 0.00 0.00 0.00 0.00 0.00
00:16:08.893 ===================================================================================================================
00:16:08.893 Total : 35747.10 139.64 0.00 0.00 0.00 0.00 0.00
00:16:08.893
00:16:08.893
00:16:08.893 Latency(us)
00:16:08.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:08.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:08.893 Nvme0n1 : 10.00 35744.76 139.63 0.00 0.00 3577.99 2137.72 18849.40
00:16:08.893 ===================================================================================================================
00:16:08.893 Total : 35744.76 139.63 0.00 0.00 3577.99 2137.72 18849.40
00:16:08.893 0
00:16:08.893 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4045817
00:16:08.893 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 4045817 ']'
00:16:08.893 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 4045817
00:16:08.893 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname
00:16:08.893 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:16:08.893 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4045817
00:16:09.153 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:16:09.153 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:16:09.153 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4045817'
00:16:09.153 killing process with pid 4045817
00:16:09.153 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 4045817
00:16:09.153 Received shutdown signal, test time was about 10.000000 seconds
00:16:09.153
00:16:09.153 Latency(us)
00:16:09.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:09.153 ===================================================================================================================
00:16:09.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:09.153 10:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 4045817
00:16:09.153 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:16:09.413 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:16:09.674 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:09.674 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:16:09.674 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:16:09.674 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:16:09.674 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:16:09.934 [2024-06-10 10:43:38.817358] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]]
00:16:09.934 10:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:10.195 request:
00:16:10.195 {
00:16:10.195 "uuid": "aecfe403-472b-44f6-9f2f-186a1fafcb80",
00:16:10.195 "method": "bdev_lvol_get_lvstores",
00:16:10.195 "req_id": 1
00:16:10.195 }
00:16:10.195 Got JSON-RPC error response
00:16:10.195 response:
00:16:10.195 {
00:16:10.195 "code": -19,
00:16:10.195 "message": "No such device"
00:16:10.195 }
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:16:10.195 aio_bdev
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 96646f51-a4d0-43d5-ac31-3a91bb20c923
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=96646f51-a4d0-43d5-ac31-3a91bb20c923
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout=
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]]
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000
00:16:10.195 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:16:10.547 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96646f51-a4d0-43d5-ac31-3a91bb20c923 -t 2000
00:16:10.547 [
00:16:10.547 {
00:16:10.547 "name": "96646f51-a4d0-43d5-ac31-3a91bb20c923",
00:16:10.547 "aliases": [
00:16:10.547 "lvs/lvol"
00:16:10.547 ],
00:16:10.547 "product_name": "Logical Volume",
00:16:10.547 "block_size": 4096,
00:16:10.547 "num_blocks": 38912,
00:16:10.547 "uuid": "96646f51-a4d0-43d5-ac31-3a91bb20c923",
00:16:10.547 "assigned_rate_limits": {
00:16:10.547 "rw_ios_per_sec": 0,
00:16:10.547 "rw_mbytes_per_sec": 0,
00:16:10.547 "r_mbytes_per_sec": 0,
00:16:10.547 "w_mbytes_per_sec": 0
00:16:10.547 },
00:16:10.547 "claimed": false,
00:16:10.547 "zoned": false,
00:16:10.547 "supported_io_types": {
00:16:10.547 "read": true,
00:16:10.547 "write": true,
00:16:10.547 "unmap": true,
00:16:10.547 "write_zeroes": true,
00:16:10.547 "flush": false,
00:16:10.547 "reset": true,
00:16:10.547 "compare": false,
00:16:10.547 "compare_and_write": false,
00:16:10.547 "abort": false,
00:16:10.547 "nvme_admin": false,
00:16:10.547 "nvme_io": false
00:16:10.547 },
00:16:10.547 "driver_specific": {
00:16:10.547 "lvol": {
00:16:10.547 "lvol_store_uuid": "aecfe403-472b-44f6-9f2f-186a1fafcb80",
00:16:10.547 "base_bdev": "aio_bdev",
00:16:10.547 "thin_provision": false,
00:16:10.547 "num_allocated_clusters": 38,
00:16:10.547 "snapshot": false,
00:16:10.547 "clone": false,
00:16:10.547 "esnap_clone": false
00:16:10.547 }
00:16:10.547 }
00:16:10.547 }
00:16:10.547 ]
00:16:10.547 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0
00:16:10.547 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:10.547 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:16:10.809 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:16:10.809 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:10.809 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:16:11.069 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:16:11.069 10:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96646f51-a4d0-43d5-ac31-3a91bb20c923
00:16:11.069 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aecfe403-472b-44f6-9f2f-186a1fafcb80
00:16:11.327 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:16:11.587
00:16:11.587 real 0m15.563s
00:16:11.587 user 0m15.650s
00:16:11.587 sys 0m1.005s
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:16:11.587 ************************************
00:16:11.587 END TEST lvs_grow_clean
00:16:11.587 ************************************
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:16:11.587 ************************************
00:16:11.587 START TEST lvs_grow_dirty ************************************
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:16:11.587 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:16:11.846 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:16:11.846 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:16:11.846 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a132bfd1-eda3-46c5-a31e-20a23f4fe83c
00:16:11.846 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c
00:16:11.846 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:16:12.106 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:16:12.106 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:16:12.106 10:43:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c lvol 150
00:16:12.364 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1ad8be70-d768-4473-9606-21554cd2645d
00:16:12.364 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:16:12.364 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:16:12.364 [2024-06-10 10:43:41.316568] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:16:12.364 [2024-06-10 10:43:41.316616] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:16:12.364 true
00:16:12.364 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c
00:16:12.364 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:16:12.623 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:16:12.623 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:16:12.623 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1ad8be70-d768-4473-9606-21554cd2645d
00:16:12.883 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:16:13.142 [2024-06-10 10:43:41.978485] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:16:13.142 10:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4048376
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4048376 /var/tmp/bdevperf.sock
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 4048376 ']'
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:13.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable
00:16:13.400 10:43:42 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:16:13.401 [2024-06-10 10:43:42.218376] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:16:13.401 [2024-06-10 10:43:42.218417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048376 ]
00:16:13.401 EAL: No free 2048 kB hugepages reported on node 1
00:16:13.401 [2024-06-10 10:43:42.277718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:13.401 [2024-06-10 10:43:42.359710] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:16:14.339 10:43:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:16:14.339 10:43:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0
00:16:14.339 10:43:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:16:14.339 Nvme0n1
00:16:14.339 10:43:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:16:14.598 [
00:16:14.598 {
00:16:14.598 "name": "Nvme0n1",
00:16:14.598 "aliases": [
00:16:14.598 "1ad8be70-d768-4473-9606-21554cd2645d"
00:16:14.598 ],
00:16:14.598 "product_name": "NVMe disk",
00:16:14.598 "block_size": 4096,
00:16:14.598 "num_blocks": 38912,
00:16:14.598 "uuid": "1ad8be70-d768-4473-9606-21554cd2645d",
00:16:14.598 "assigned_rate_limits": {
00:16:14.598 "rw_ios_per_sec": 0,
00:16:14.598 "rw_mbytes_per_sec": 0,
00:16:14.598 "r_mbytes_per_sec": 0,
00:16:14.598 "w_mbytes_per_sec": 0
00:16:14.598 },
00:16:14.598 "claimed": false,
00:16:14.598 "zoned": false,
00:16:14.598 "supported_io_types": {
00:16:14.598 "read": true,
00:16:14.598 "write": true,
00:16:14.598 "unmap": true,
00:16:14.598 "write_zeroes": true,
00:16:14.598 "flush": true,
00:16:14.598 "reset": true,
00:16:14.598 "compare": true,
00:16:14.598 "compare_and_write": true,
00:16:14.598 "abort": true,
00:16:14.598 "nvme_admin": true,
00:16:14.598 "nvme_io": true
00:16:14.598 },
00:16:14.598 "memory_domains": [
00:16:14.598 {
00:16:14.598 "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:16:14.598 "dma_device_type": 0
00:16:14.598 }
00:16:14.598 ],
00:16:14.598 "driver_specific": {
00:16:14.598 "nvme": [
00:16:14.598 {
00:16:14.598 "trid": {
00:16:14.598 "trtype": "RDMA",
00:16:14.598 "adrfam": "IPv4",
00:16:14.598 "traddr": "192.168.100.8",
00:16:14.598 "trsvcid": "4420",
00:16:14.598 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:16:14.598 },
00:16:14.598 "ctrlr_data": {
00:16:14.598 "cntlid": 1,
00:16:14.598 "vendor_id": "0x8086",
00:16:14.598 "model_number": "SPDK bdev Controller",
00:16:14.598 "serial_number": "SPDK0",
00:16:14.598 "firmware_revision": "24.09",
00:16:14.598 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:16:14.598 "oacs": {
00:16:14.598 "security": 0,
00:16:14.598 "format": 0,
00:16:14.598 "firmware": 0,
00:16:14.598 "ns_manage": 0
00:16:14.598 },
00:16:14.598 "multi_ctrlr": true,
00:16:14.598 "ana_reporting": false
00:16:14.598 },
00:16:14.598 "vs": {
00:16:14.598 "nvme_version": "1.3"
00:16:14.598 },
00:16:14.598 "ns_data": {
00:16:14.598 "id": 1,
00:16:14.598 "can_share": true
00:16:14.598 }
00:16:14.598 }
00:16:14.598 ],
00:16:14.598 "mp_policy": "active_passive"
00:16:14.598 }
00:16:14.598 }
00:16:14.598 ]
00:16:14.598 10:43:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4048601
00:16:14.598 10:43:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:16:14.598 10:43:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:14.598 Running I/O for 10 seconds...
00:16:15.533 Latency(us)
00:16:15.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:15.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:15.533 Nvme0n1 : 1.00 34940.00 136.48 0.00 0.00 0.00 0.00 0.00
00:16:15.533 ===================================================================================================================
00:16:15.533 Total : 34940.00 136.48 0.00 0.00 0.00 0.00 0.00
00:16:15.533
00:16:16.469 10:43:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c
00:16:16.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:16.728 Nvme0n1 : 2.00 35267.00 137.76 0.00 0.00 0.00 0.00 0.00
00:16:16.728 ===================================================================================================================
00:16:16.728 Total : 35267.00 137.76 0.00 0.00 0.00 0.00 0.00
00:16:16.728
00:16:16.728 true
00:16:16.728 10:43:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c
00:16:16.728 10:43:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:16:16.986 10:43:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:16:16.986 10:43:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:16:16.986 10:43:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4048601
00:16:17.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:17.553 Nvme0n1 : 3.00 35368.33 138.16 0.00 0.00 0.00 0.00 0.00
00:16:17.553 ===================================================================================================================
00:16:17.553 Total : 35368.33 138.16 0.00 0.00 0.00 0.00 0.00
00:16:17.553
00:16:18.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:18.489 Nvme0n1 : 4.00 35481.75 138.60 0.00 0.00 0.00 0.00 0.00
00:16:18.489 ===================================================================================================================
00:16:18.489 Total : 35481.75 138.60 0.00 0.00 0.00 0.00 0.00
00:16:18.489
00:16:19.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:19.866 Nvme0n1 : 5.00 35566.00 138.93 0.00 0.00 0.00 0.00 0.00
00:16:19.866 ===================================================================================================================
00:16:19.866 Total : 35566.00 138.93 0.00 0.00 0.00 0.00 0.00
00:16:19.866
00:16:20.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:20.802 Nvme0n1 : 6.00 35631.17 139.18 0.00 0.00 0.00 0.00 0.00
00:16:20.802 ===================================================================================================================
00:16:20.803 Total : 35631.17 139.18 0.00 0.00 0.00 0.00 0.00
00:16:20.803
00:16:21.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:21.738 Nvme0n1 : 7.00 35666.71 139.32 0.00 0.00 0.00 0.00 0.00
00:16:21.738 ===================================================================================================================
00:16:21.738 Total : 35666.71 139.32 0.00 0.00 0.00 0.00 0.00
00:16:21.738
00:16:22.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:22.672 Nvme0n1 : 8.00 35703.50 139.47 0.00 0.00 0.00 0.00 0.00
00:16:22.672 ===================================================================================================================
00:16:22.672 Total : 35703.50 139.47 0.00 0.00 0.00 0.00 0.00
00:16:22.672
00:16:23.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:23.605 Nvme0n1 : 9.00 35723.00 139.54 0.00 0.00 0.00 0.00 0.00
00:16:23.605 ===================================================================================================================
00:16:23.605 Total : 35723.00 139.54 0.00 0.00 0.00 0.00 0.00
00:16:23.605
00:16:24.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:24.542 Nvme0n1 : 10.00 35746.60 139.64 0.00 0.00 0.00 0.00 0.00
00:16:24.542 ===================================================================================================================
00:16:24.542 Total : 35746.60 139.64 0.00 0.00 0.00 0.00 0.00
00:16:24.542
00:16:24.542
00:16:24.542 Latency(us)
00:16:24.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:24.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:24.542 Nvme0n1 : 10.00 35746.69 139.64 0.00 0.00 3577.77 2481.01 13044.78
00:16:24.542 ===================================================================================================================
00:16:24.542 Total : 35746.69 139.64 0.00 0.00 3577.77 2481.01 13044.78
00:16:24.542 0
00:16:24.542 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4048376
00:16:24.542 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 4048376 ']'
00:16:24.542 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 4048376
00:16:24.542 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname
00:16:24.542 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:16:24.542 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4048376
00:16:24.801 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:16:24.801 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:16:24.801 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4048376'
00:16:24.801 killing process with pid 4048376
00:16:24.801 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 4048376
00:16:24.801 Received shutdown signal, test time was about 10.000000 seconds
00:16:24.801
00:16:24.801 Latency(us)
00:16:24.801 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.801 =================================================================================================================== 00:16:24.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.801 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 4048376 00:16:24.801 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:25.059 10:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4045317 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4045317 00:16:25.317 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4045317 Killed "${NVMF_APP[@]}" "$@" 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4050421 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4050421 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 4050421 ']' 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
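What the lvs_grow_dirty case exercises around this point: the target holding the lvstore is killed with SIGKILL (deliberately skipping a clean blobstore shutdown), a fresh nvmf_tgt is started, and re-creating the AIO bdev forces blobstore recovery on load. A condensed sketch under those assumptions; $old_nvmfpid and $aio_file are placeholders for the pid and backing file tracked by the harness:

    kill -9 "$old_nvmfpid"                 # dirty shutdown, on purpose
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &       # new target instance, flags as traced
    nvmfpid=$!
    # Re-attach the same backing file; the blobstore replays its metadata
    # and logs "Performing recovery on blobstore", as seen just below.
    rpc.py bdev_aio_create "$aio_file" aio_bdev 4096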
00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:25.317 10:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:25.317 [2024-06-10 10:43:54.338963] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:25.317 [2024-06-10 10:43:54.339009] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.579 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.579 [2024-06-10 10:43:54.399715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.579 [2024-06-10 10:43:54.475868] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.579 [2024-06-10 10:43:54.475903] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.579 [2024-06-10 10:43:54.475910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.579 [2024-06-10 10:43:54.475916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.579 [2024-06-10 10:43:54.475921] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.579 [2024-06-10 10:43:54.475954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.180 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:26.180 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:16:26.180 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.180 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:26.180 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:26.180 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.180 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:26.439 [2024-06-10 10:43:55.312048] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:26.439 [2024-06-10 10:43:55.312135] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:26.439 [2024-06-10 10:43:55.312159] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:26.439 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:26.439 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1ad8be70-d768-4473-9606-21554cd2645d 00:16:26.439 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=1ad8be70-d768-4473-9606-21554cd2645d 00:16:26.439 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:26.439 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:26.439 10:43:55 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:26.439 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:26.439 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:26.697 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1ad8be70-d768-4473-9606-21554cd2645d -t 2000 00:16:26.697 [ 00:16:26.697 { 00:16:26.697 "name": "1ad8be70-d768-4473-9606-21554cd2645d", 00:16:26.697 "aliases": [ 00:16:26.697 "lvs/lvol" 00:16:26.697 ], 00:16:26.697 "product_name": "Logical Volume", 00:16:26.697 "block_size": 4096, 00:16:26.697 "num_blocks": 38912, 00:16:26.697 "uuid": "1ad8be70-d768-4473-9606-21554cd2645d", 00:16:26.697 "assigned_rate_limits": { 00:16:26.697 "rw_ios_per_sec": 0, 00:16:26.697 "rw_mbytes_per_sec": 0, 00:16:26.698 "r_mbytes_per_sec": 0, 00:16:26.698 "w_mbytes_per_sec": 0 00:16:26.698 }, 00:16:26.698 "claimed": false, 00:16:26.698 "zoned": false, 00:16:26.698 "supported_io_types": { 00:16:26.698 "read": true, 00:16:26.698 "write": true, 00:16:26.698 "unmap": true, 00:16:26.698 "write_zeroes": true, 00:16:26.698 "flush": false, 00:16:26.698 "reset": true, 00:16:26.698 "compare": false, 00:16:26.698 "compare_and_write": false, 00:16:26.698 "abort": false, 00:16:26.698 "nvme_admin": false, 00:16:26.698 "nvme_io": false 00:16:26.698 }, 00:16:26.698 "driver_specific": { 00:16:26.698 "lvol": { 00:16:26.698 "lvol_store_uuid": "a132bfd1-eda3-46c5-a31e-20a23f4fe83c", 00:16:26.698 "base_bdev": "aio_bdev", 00:16:26.698 "thin_provision": false, 00:16:26.698 "num_allocated_clusters": 38, 00:16:26.698 "snapshot": false, 00:16:26.698 "clone": false, 00:16:26.698 "esnap_clone": false 00:16:26.698 } 00:16:26.698 } 00:16:26.698 } 00:16:26.698 ] 00:16:26.698 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:26.698 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:26.698 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:26.956 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:26.956 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:26.956 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:26.956 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:26.956 10:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:27.215 [2024-06-10 10:43:56.124374] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:16:27.215 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:27.474 request: 00:16:27.474 { 00:16:27.474 "uuid": "a132bfd1-eda3-46c5-a31e-20a23f4fe83c", 00:16:27.474 "method": "bdev_lvol_get_lvstores", 00:16:27.474 "req_id": 1 00:16:27.474 } 00:16:27.474 Got JSON-RPC error response 00:16:27.474 response: 00:16:27.474 { 00:16:27.474 "code": -19, 00:16:27.474 "message": "No such device" 00:16:27.474 } 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:27.474 aio_bdev 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1ad8be70-d768-4473-9606-21554cd2645d 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=1ad8be70-d768-4473-9606-21554cd2645d 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:27.474 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:27.733 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1ad8be70-d768-4473-9606-21554cd2645d -t 2000 00:16:27.992 [ 00:16:27.992 { 00:16:27.992 "name": "1ad8be70-d768-4473-9606-21554cd2645d", 00:16:27.992 "aliases": [ 00:16:27.992 "lvs/lvol" 00:16:27.992 ], 00:16:27.992 "product_name": "Logical Volume", 00:16:27.992 "block_size": 4096, 00:16:27.992 "num_blocks": 38912, 00:16:27.992 "uuid": "1ad8be70-d768-4473-9606-21554cd2645d", 00:16:27.992 "assigned_rate_limits": { 00:16:27.992 "rw_ios_per_sec": 0, 00:16:27.992 "rw_mbytes_per_sec": 0, 00:16:27.992 "r_mbytes_per_sec": 0, 00:16:27.992 "w_mbytes_per_sec": 0 00:16:27.992 }, 00:16:27.992 "claimed": false, 00:16:27.992 "zoned": false, 00:16:27.992 "supported_io_types": { 00:16:27.992 "read": true, 00:16:27.992 "write": true, 00:16:27.992 "unmap": true, 00:16:27.992 "write_zeroes": true, 00:16:27.992 "flush": false, 00:16:27.992 "reset": true, 00:16:27.992 "compare": false, 00:16:27.992 "compare_and_write": false, 00:16:27.992 "abort": false, 00:16:27.992 "nvme_admin": false, 00:16:27.992 "nvme_io": false 00:16:27.992 }, 00:16:27.992 "driver_specific": { 00:16:27.992 "lvol": { 00:16:27.992 "lvol_store_uuid": "a132bfd1-eda3-46c5-a31e-20a23f4fe83c", 00:16:27.992 "base_bdev": "aio_bdev", 00:16:27.992 "thin_provision": false, 00:16:27.992 "num_allocated_clusters": 38, 00:16:27.992 "snapshot": false, 00:16:27.992 "clone": false, 00:16:27.992 "esnap_clone": false 00:16:27.992 } 00:16:27.992 } 00:16:27.992 } 00:16:27.992 ] 00:16:27.992 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:16:27.992 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:27.992 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:27.992 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:27.992 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:27.992 10:43:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:28.251 10:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:28.251 10:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1ad8be70-d768-4473-9606-21554cd2645d 00:16:28.251 10:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a132bfd1-eda3-46c5-a31e-20a23f4fe83c 00:16:28.509 10:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:28.767 00:16:28.767 real 0m17.153s 00:16:28.767 user 0m45.254s 00:16:28.767 sys 0m2.830s 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:28.767 ************************************ 00:16:28.767 END TEST lvs_grow_dirty 00:16:28.767 ************************************ 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:28.767 nvmf_trace.0 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:28.767 rmmod nvme_rdma 00:16:28.767 rmmod nvme_fabrics 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4050421 ']' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4050421 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 4050421 ']' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 4050421 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4050421 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:28.767 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4050421' 00:16:28.768 killing process with pid 4050421 00:16:28.768 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 4050421 00:16:28.768 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 4050421 00:16:29.026 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.027 10:43:57 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:29.027 00:16:29.027 real 0m40.134s 00:16:29.027 user 1m6.608s 00:16:29.027 sys 0m8.570s 00:16:29.027 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:29.027 10:43:57 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:29.027 ************************************ 00:16:29.027 END TEST nvmf_lvs_grow 00:16:29.027 ************************************ 00:16:29.027 10:43:57 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:16:29.027 10:43:57 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:29.027 10:43:57 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:29.027 10:43:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:29.027 ************************************ 00:16:29.027 START TEST nvmf_bdev_io_wait 00:16:29.027 ************************************ 00:16:29.027 10:43:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:16:29.286 * Looking for test storage... 
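The killprocess helper traced above follows a standard shell pattern: probe liveness with kill -0, refuse to signal a sudo wrapper, then kill and reap. A condensed sketch of that logic (not the verbatim helper):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if gone
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 above
        [ "$name" = sudo ] && return 1           # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap and surface exit status
    }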
00:16:29.286 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.286 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.286 10:43:58 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.287 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.287 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.287 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:29.287 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.287 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.287 10:43:58 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:35.849 
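The device scan that follows walks the harness's own pci_bus_cache for the Intel E810 device IDs (0x1592, 0x159b). An equivalent standalone enumeration with lspci — a substitution, since the harness does not shell out to lspci — might look like:

    #!/usr/bin/env bash
    set -euo pipefail
    # Intel E810 device IDs from the trace: 0x1592 and 0x159b.
    for id in 1592 159b; do
        lspci -Dnd "8086:${id}" | while read -r addr _; do
            echo "Found ${addr} (0x8086 - 0x${id})"
            # The net device name lives under the PCI device's sysfs node,
            # which is where the harness finds cvl_0_0 / cvl_0_1 below.
            ls "/sys/bus/pci/devices/${addr}/net" 2>/dev/null || true
        done
    done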
10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:35.849 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:35.849 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # modinfo irdma 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:35.849 Found net devices under 0000:af:00.0: cvl_0_0 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:35.849 Found net devices under 0000:af:00.1: cvl_0_1 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:35.849 10:44:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:35.849 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:16:35.850 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:35.850 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:35.850 altname enp175s0f0np0 00:16:35.850 altname ens801f0np0 00:16:35.850 inet 192.168.100.8/24 scope global cvl_0_0 00:16:35.850 valid_lft forever preferred_lft forever 00:16:35.850 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:35.850 valid_lft forever preferred_lft forever 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:16:35.850 17: cvl_0_1: mtu 1500 qdisc mq state UP 
group default qlen 1000 00:16:35.850 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:35.850 altname enp175s0f1np1 00:16:35.850 altname ens801f1np1 00:16:35.850 inet 192.168.100.9/24 scope global cvl_0_1 00:16:35.850 valid_lft forever preferred_lft forever 00:16:35.850 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:35.850 valid_lft forever preferred_lft forever 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:35.850 192.168.100.9' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:35.850 192.168.100.9' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:35.850 192.168.100.9' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4054484 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4054484 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 4054484 ']' 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
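The address discovery above is plain iproute2 plus text tools. A condensed sketch using the interface names and the exact pipeline from this trace:

    get_ip_address() {
        # Field 4 of `ip -o -4 addr show <if>` is the CIDR address.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for ifc in cvl_0_0 cvl_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9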
00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:35.850 10:44:04 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:35.850 [2024-06-10 10:44:04.214901] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:35.851 [2024-06-10 10:44:04.214950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.851 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.851 [2024-06-10 10:44:04.275445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.851 [2024-06-10 10:44:04.353204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.851 [2024-06-10 10:44:04.353251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.851 [2024-06-10 10:44:04.353258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.851 [2024-06-10 10:44:04.353264] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.851 [2024-06-10 10:44:04.353268] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.851 [2024-06-10 10:44:04.353326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.851 [2024-06-10 10:44:04.353419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.851 [2024-06-10 10:44:04.353437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.851 [2024-06-10 10:44:04.353441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.110 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.369 [2024-06-10 10:44:05.144050] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x813910/0x812f50) succeed. 00:16:36.369 [2024-06-10 10:44:05.152678] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x814c80/0x8134d0) succeed. 00:16:36.369 [2024-06-10 10:44:05.152699] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.369 Malloc0 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:36.369 [2024-06-10 10:44:05.210932] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4054728 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4054730 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:36.369 { 00:16:36.369 "params": { 00:16:36.369 "name": "Nvme$subsystem", 00:16:36.369 "trtype": "$TEST_TRANSPORT", 00:16:36.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.369 "adrfam": "ipv4", 00:16:36.369 "trsvcid": "$NVMF_PORT", 00:16:36.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.369 "hdgst": ${hdgst:-false}, 00:16:36.369 "ddgst": ${ddgst:-false} 00:16:36.369 }, 00:16:36.369 "method": "bdev_nvme_attach_controller" 00:16:36.369 } 00:16:36.369 EOF 00:16:36.369 )") 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4054732 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:36.369 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:36.369 { 00:16:36.369 "params": { 00:16:36.369 "name": "Nvme$subsystem", 00:16:36.369 "trtype": "$TEST_TRANSPORT", 00:16:36.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.369 "adrfam": "ipv4", 00:16:36.370 "trsvcid": "$NVMF_PORT", 00:16:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.370 "hdgst": ${hdgst:-false}, 00:16:36.370 "ddgst": ${ddgst:-false} 00:16:36.370 }, 00:16:36.370 "method": "bdev_nvme_attach_controller" 00:16:36.370 } 00:16:36.370 EOF 00:16:36.370 )") 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4054735 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:36.370 { 00:16:36.370 "params": { 00:16:36.370 "name": "Nvme$subsystem", 00:16:36.370 "trtype": "$TEST_TRANSPORT", 00:16:36.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.370 "adrfam": "ipv4", 00:16:36.370 "trsvcid": "$NVMF_PORT", 
00:16:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.370 "hdgst": ${hdgst:-false}, 00:16:36.370 "ddgst": ${ddgst:-false} 00:16:36.370 }, 00:16:36.370 "method": "bdev_nvme_attach_controller" 00:16:36.370 } 00:16:36.370 EOF 00:16:36.370 )") 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:36.370 { 00:16:36.370 "params": { 00:16:36.370 "name": "Nvme$subsystem", 00:16:36.370 "trtype": "$TEST_TRANSPORT", 00:16:36.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.370 "adrfam": "ipv4", 00:16:36.370 "trsvcid": "$NVMF_PORT", 00:16:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.370 "hdgst": ${hdgst:-false}, 00:16:36.370 "ddgst": ${ddgst:-false} 00:16:36.370 }, 00:16:36.370 "method": "bdev_nvme_attach_controller" 00:16:36.370 } 00:16:36.370 EOF 00:16:36.370 )") 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4054728 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:36.370 "params": { 00:16:36.370 "name": "Nvme1", 00:16:36.370 "trtype": "rdma", 00:16:36.370 "traddr": "192.168.100.8", 00:16:36.370 "adrfam": "ipv4", 00:16:36.370 "trsvcid": "4420", 00:16:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.370 "hdgst": false, 00:16:36.370 "ddgst": false 00:16:36.370 }, 00:16:36.370 "method": "bdev_nvme_attach_controller" 00:16:36.370 }' 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:36.370 "params": { 00:16:36.370 "name": "Nvme1", 00:16:36.370 "trtype": "rdma", 00:16:36.370 "traddr": "192.168.100.8", 00:16:36.370 "adrfam": "ipv4", 00:16:36.370 "trsvcid": "4420", 00:16:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.370 "hdgst": false, 00:16:36.370 "ddgst": false 00:16:36.370 }, 00:16:36.370 "method": "bdev_nvme_attach_controller" 00:16:36.370 }' 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:36.370 "params": { 00:16:36.370 "name": "Nvme1", 00:16:36.370 "trtype": "rdma", 00:16:36.370 "traddr": "192.168.100.8", 00:16:36.370 "adrfam": "ipv4", 00:16:36.370 "trsvcid": "4420", 00:16:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.370 "hdgst": false, 00:16:36.370 "ddgst": false 00:16:36.370 }, 00:16:36.370 "method": "bdev_nvme_attach_controller" 00:16:36.370 }' 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:36.370 10:44:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:36.370 "params": { 00:16:36.370 "name": "Nvme1", 00:16:36.370 "trtype": "rdma", 00:16:36.370 "traddr": "192.168.100.8", 00:16:36.370 "adrfam": "ipv4", 00:16:36.370 "trsvcid": "4420", 00:16:36.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.370 "hdgst": false, 00:16:36.370 "ddgst": false 00:16:36.370 }, 00:16:36.370 "method": "bdev_nvme_attach_controller" 00:16:36.370 }' [2024-06-10 10:44:05.259995] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... [2024-06-10 10:44:05.259994] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... [2024-06-10 10:44:05.260044] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:36.370 [2024-06-10 10:44:05.260045] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:36.370 [2024-06-10 10:44:05.260081] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... [2024-06-10 10:44:05.260115] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] [2024-06-10 10:44:05.263949] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:16:36.370 [2024-06-10 10:44:05.263996] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:36.370 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.629 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.629 [2024-06-10 10:44:05.448133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.629 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.629 [2024-06-10 10:44:05.528705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:16:36.629 [2024-06-10 10:44:05.540320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.629 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.629 [2024-06-10 10:44:05.618924] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:16:36.629 [2024-06-10 10:44:05.639297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.888 [2024-06-10 10:44:05.681454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.888 [2024-06-10 10:44:05.733513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:16:36.888 [2024-06-10 10:44:05.760217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:16:36.888 Running I/O for 1 seconds... 00:16:36.888 Running I/O for 1 seconds... 00:16:36.888 Running I/O for 1 seconds... 00:16:36.888 Running I/O for 1 seconds... 00:16:37.824 00:16:37.824 Latency(us) 00:16:37.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.824 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:37.824 Nvme1n1 : 1.01 16465.92 64.32 0.00 0.00 7749.12 4618.73 18100.42 00:16:37.824 =================================================================================================================== 00:16:37.824 Total : 16465.92 64.32 0.00 0.00 7749.12 4618.73 18100.42 00:16:37.824 00:16:37.824 Latency(us) 00:16:37.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.824 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:37.824 Nvme1n1 : 1.01 15120.17 59.06 0.00 0.00 8439.44 5274.09 23842.62 00:16:37.824 =================================================================================================================== 00:16:37.824 Total : 15120.17 59.06 0.00 0.00 8439.44 5274.09 23842.62 00:16:38.083 00:16:38.083 Latency(us) 00:16:38.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.083 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:38.083 Nvme1n1 : 1.00 17918.09 69.99 0.00 0.00 7127.73 3464.05 19473.55 00:16:38.083 =================================================================================================================== 00:16:38.083 Total : 17918.09 69.99 0.00 0.00 7127.73 3464.05 19473.55 00:16:38.083 00:16:38.083 Latency(us) 00:16:38.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.084 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:38.084 Nvme1n1 : 1.00 253970.79 992.07 0.00 0.00 501.99 204.80 1927.07 00:16:38.084 =================================================================================================================== 00:16:38.084 Total : 253970.79 992.07 0.00 0.00 501.99 204.80 1927.07 00:16:38.084 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 4054730 00:16:38.084 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4054732 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4054735 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:38.343 rmmod nvme_rdma 00:16:38.343 rmmod nvme_fabrics 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4054484 ']' 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4054484 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 4054484 ']' 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 4054484 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4054484 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4054484' 00:16:38.343 killing process with pid 4054484 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 4054484 00:16:38.343 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 4054484 00:16:38.601 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.601 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:38.601 00:16:38.601 real 0m9.553s 00:16:38.601 user 0m20.320s 00:16:38.601 sys 0m5.814s 00:16:38.601 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:16:38.601 10:44:07 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:38.601 ************************************ 00:16:38.601 END TEST nvmf_bdev_io_wait 00:16:38.601 ************************************ 00:16:38.601 10:44:07 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:16:38.601 10:44:07 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:38.601 10:44:07 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:38.601 10:44:07 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:38.601 ************************************ 00:16:38.601 START TEST nvmf_queue_depth 00:16:38.601 ************************************ 00:16:38.601 10:44:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:16:38.860 * Looking for test storage... 00:16:38.860 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.860 10:44:07 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- 
target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.861 10:44:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:45.445 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:45.445 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@373 -- # [[ -e 
/sys/module/irdma/parameters/roce_ena ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@377 -- # modinfo irdma 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:45.445 Found net devices under 0000:af:00.0: cvl_0_0 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:45.445 Found net devices under 0000:af:00.1: cvl_0_1 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:45.445 
10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:45.445 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:16:45.446 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:45.446 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:45.446 altname enp175s0f0np0 00:16:45.446 altname ens801f0np0 00:16:45.446 inet 192.168.100.8/24 scope global cvl_0_0 00:16:45.446 valid_lft forever preferred_lft forever 00:16:45.446 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:45.446 valid_lft forever preferred_lft forever 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth 
-- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:16:45.446 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:45.446 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:45.446 altname enp175s0f1np1 00:16:45.446 altname ens801f1np1 00:16:45.446 inet 192.168.100.9/24 scope global cvl_0_1 00:16:45.446 valid_lft forever preferred_lft forever 00:16:45.446 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:45.446 valid_lft forever preferred_lft forever 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:45.446 
10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:45.446 192.168.100.9' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:45.446 192.168.100.9' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:45.446 192.168.100.9' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4058541 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4058541 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 4058541 ']' 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:45.446 10:44:13 
nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:45.446 10:44:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.446 [2024-06-10 10:44:13.579299] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:45.446 [2024-06-10 10:44:13.579343] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.446 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.446 [2024-06-10 10:44:13.638732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.446 [2024-06-10 10:44:13.715657] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.446 [2024-06-10 10:44:13.715694] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.446 [2024-06-10 10:44:13.715701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.446 [2024-06-10 10:44:13.715706] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.446 [2024-06-10 10:44:13.715711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.446 [2024-06-10 10:44:13.715743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.446 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.446 [2024-06-10 10:44:14.425909] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1213af0/0x1213130) succeed. 00:16:45.447 [2024-06-10 10:44:14.434271] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1214da0/0x12136b0) succeed. 00:16:45.447 [2024-06-10 10:44:14.434293] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:16:45.447 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.447 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:45.447 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.447 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.447 Malloc0 00:16:45.447 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.447 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:45.447 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.705 [2024-06-10 10:44:14.496203] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4058599 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4058599 /var/tmp/bdevperf.sock 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 4058599 ']' 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
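The rpc_cmd calls traced above configure the target over /var/tmp/spdk.sock and map one-to-one onto SPDK's stock RPC client. As a sketch, the equivalent standalone invocations from the spdk checkout (all values taken from this run) would be:

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420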
00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:45.705 10:44:14 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.705 [2024-06-10 10:44:14.541219] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:45.705 [2024-06-10 10:44:14.541257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058599 ] 00:16:45.705 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.705 [2024-06-10 10:44:14.600160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.705 [2024-06-10 10:44:14.678506] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.642 10:44:15 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:46.642 10:44:15 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:46.642 10:44:15 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:46.642 10:44:15 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:46.642 10:44:15 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:46.642 NVMe0n1 00:16:46.642 10:44:15 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:46.642 10:44:15 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:46.642 Running I/O for 10 seconds... 
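The initiator side mirrors the target setup: bdevperf is started with -z so it idles until driven over its own RPC socket, an NVMe-oF controller is attached to it across the RDMA fabric, and perform_tests starts the 10-second verify workload at queue depth 1024. A sketch of those three steps, with the long workspace paths shortened but all arguments as traced above:

    # Initiator-side flow (paths abbreviated; arguments from the log).
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The results table that follows is bdevperf's summary for that run: roughly 17.3k IOPS at 4 KiB, i.e. about 67.6 MiB/s through the single deep queue.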
00:16:56.682
00:16:56.682 Latency(us)
00:16:56.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:56.682 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:56.682 Verification LBA range: start 0x0 length 0x4000
00:16:56.682 NVMe0n1 : 10.05 17310.75 67.62 0.00 0.00 58994.05 19598.38 38947.11
00:16:56.682 ===================================================================================================================
00:16:56.682 Total : 17310.75 67.62 0.00 0.00 58994.05 19598.38 38947.11
00:16:56.682 0
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4058599
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 4058599 ']'
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 4058599
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4058599
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4058599'
00:16:56.682 killing process with pid 4058599
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 4058599
00:16:56.682 Received shutdown signal, test time was about 10.000000 seconds
00:16:56.682
00:16:56.682 Latency(us)
00:16:56.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:56.682 ===================================================================================================================
00:16:56.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:56.682 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 4058599
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:16:56.941 rmmod nvme_rdma
00:16:56.941 rmmod nvme_fabrics
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4058541 ']'
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4058541
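That is the whole shutdown path for the test: nvmftestfini clears the EXIT trap, syncs, and unloads nvme-rdma and nvme-fabrics inside a for i in {1..20} retry loop with errexit suspended. The trace only shows the successful first pass, so the loop body below is a hedged reconstruction, not a copy of nvmf/common.sh:

    # Unload-with-retry, as suggested by the nvmf/common.sh trace above.
    # Only the loop bound, the set +e/-e bracketing, and the two modprobe -r
    # calls are visible in the log; the break/sleep details are assumptions.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e

The killprocess trace that resumes below then takes down the nvmf target app itself (pid 4058541), the same way bdevperf was killed above.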
00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 4058541 ']' 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 4058541 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4058541 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4058541' 00:16:56.941 killing process with pid 4058541 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 4058541 00:16:56.941 10:44:25 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 4058541 00:16:57.201 10:44:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:57.201 10:44:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:57.201 00:16:57.201 real 0m18.537s 00:16:57.201 user 0m25.946s 00:16:57.201 sys 0m4.905s 00:16:57.201 10:44:26 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:57.201 10:44:26 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:57.201 ************************************ 00:16:57.201 END TEST nvmf_queue_depth 00:16:57.201 ************************************ 00:16:57.201 10:44:26 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:16:57.201 10:44:26 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:57.201 10:44:26 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:57.201 10:44:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:57.201 ************************************ 00:16:57.201 START TEST nvmf_target_multipath 00:16:57.201 ************************************ 00:16:57.201 10:44:26 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:16:57.461 * Looking for test storage... 
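Each sub-test is dispatched through run_test, which produced the START/END banners and the real 0m18.537s / user 0m25.946s / sys 0m4.905s timing above. A minimal stand-in for that wrapper, assuming only what the banners and timing output imply (the real helper in autotest_common.sh does more bookkeeping):

    # Sketch: banner-and-timing wrapper behind 'run_test NAME script args...'.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

The multipath test that was just started continues below, beginning with the probe for its test storage.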
00:16:57.461 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.461 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.462 10:44:26 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.032 10:44:31 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:04.032 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:04.032 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:17:04.032 10:44:31 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@377 -- # modinfo irdma 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:04.032 Found net devices under 0000:af:00.0: cvl_0_0 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:04.032 Found net devices under 0000:af:00.1: cvl_0_1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 
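Before any addresses are read, rdma_device_init above reloads the NIC driver and the kernel RDMA stack: for these e810 ports irdma is reloaded with roce_ena=1 so they run RoCE rather than iWARP, then the IB/RDMA core modules are loaded. The allocate_nic_ips walk that follows reads each RDMA netdev's IPv4 address with the get_ip_address helper, whose pipeline is visible in the trace. A condensed sketch of both steps, using the interface name and commands from the log:

    # Driver and RDMA stack bring-up, as traced above.
    modprobe irdma roce_ena=1    # e810: enable RoCE on the irdma driver
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # get_ip_address: first IPv4 address of an interface.
    ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8

The same pipeline run against cvl_0_1 yields 192.168.100.9, and further down the two addresses become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP via head -n 1 and tail -n +2 over the collected list.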
00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:17:04.032 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:04.032 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:04.032 altname enp175s0f0np0 00:17:04.032 altname ens801f0np0 00:17:04.032 inet 192.168.100.8/24 scope global cvl_0_0 00:17:04.032 valid_lft forever preferred_lft forever 00:17:04.032 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:04.032 valid_lft forever preferred_lft forever 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:17:04.032 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:04.032 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:04.032 altname enp175s0f1np1 00:17:04.032 altname ens801f1np1 00:17:04.032 inet 192.168.100.9/24 scope global cvl_0_1 00:17:04.032 valid_lft forever preferred_lft forever 00:17:04.032 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:04.032 valid_lft forever preferred_lft forever 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:04.032 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == 
\c\v\l\_\0\_\1 ]] 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:04.033 192.168.100.9' 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:04.033 192.168.100.9' 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:17:04.033 10:44:31 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:04.033 192.168.100.9' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:17:04.033 run this test only with TCP transport for now 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@117 -- # sync 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:04.033 rmmod nvme_rdma 00:17:04.033 rmmod nvme_fabrics 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:04.033 00:17:04.033 real 0m5.856s 00:17:04.033 user 0m1.610s 00:17:04.033 sys 0m4.339s 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:04.033 10:44:32 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:04.033 ************************************ 00:17:04.033 END TEST nvmf_target_multipath 00:17:04.033 ************************************ 00:17:04.033 10:44:32 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:17:04.033 10:44:32 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:04.033 10:44:32 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:04.033 10:44:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:04.033 ************************************ 00:17:04.033 START TEST 
nvmf_zcopy 00:17:04.033 ************************************ 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:17:04.033 * Looking for test storage... 00:17:04.033 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.033 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:04.034 10:44:32 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # 
(( 2 == 0 )) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:09.304 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:09.304 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.304 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@377 -- # modinfo irdma 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:09.305 Found net devices under 0000:af:00.0: cvl_0_0 00:17:09.305 10:44:37 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:09.305 Found net devices under 0000:af:00.1: cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:09.305 10:44:38 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:17:09.305 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:09.305 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:09.305 altname enp175s0f0np0 00:17:09.305 altname ens801f0np0 00:17:09.305 inet 192.168.100.8/24 scope global cvl_0_0 00:17:09.305 valid_lft forever preferred_lft forever 00:17:09.305 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:09.305 valid_lft forever preferred_lft forever 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:17:09.305 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:09.305 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:09.305 altname enp175s0f1np1 00:17:09.305 altname ens801f1np1 00:17:09.305 inet 192.168.100.9/24 scope global cvl_0_1 00:17:09.305 valid_lft forever preferred_lft forever 00:17:09.305 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:09.305 valid_lft forever preferred_lft forever 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:09.305 192.168.100.9' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:09.305 192.168.100.9' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:09.305 192.168.100.9' 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:09.305 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4067365 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4067365 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 4067365 ']' 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:09.306 10:44:38 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:09.306 [2024-06-10 10:44:38.235042] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:09.306 [2024-06-10 10:44:38.235087] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.306 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.306 [2024-06-10 10:44:38.292927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.565 [2024-06-10 10:44:38.369564] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.565 [2024-06-10 10:44:38.369598] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.565 [2024-06-10 10:44:38.369605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.565 [2024-06-10 10:44:38.369611] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.565 [2024-06-10 10:44:38.369616] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
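Note: the nvmfappstart step traced above boots the SPDK target and blocks until its JSON-RPC socket answers, and the app_setup_trace notices point at the shared-memory trace file. A minimal sketch of that start-and-wait sequence, assuming the default /var/tmp/spdk.sock socket and repo-relative binary paths (the real logic lives in nvmf/common.sh and is more involved):

# Launch the target with the flags seen in the trace: -i shm id, -e tracepoint mask, -m core mask.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the JSON-RPC UNIX socket until the app answers; rpc_get_methods is a cheap readiness probe.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# As the notices above suggest, snapshot tracepoints live, or keep the shm file for offline analysis.
./build/bin/spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/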
00:17:09.565 [2024-06-10 10:44:38.369632] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:17:10.133 Unsupported transport: rdma 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # type=--id 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # id=0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # for n in $shm_files 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:10.133 nvmf_trace.0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@822 -- # return 0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:10.133 rmmod nvme_rdma 00:17:10.133 rmmod nvme_fabrics 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4067365 ']' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4067365 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 4067365 ']' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 4067365 00:17:10.133 10:44:39 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:10.133 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4067365 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4067365' 00:17:10.392 killing process with pid 4067365 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 4067365 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 4067365 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:10.392 00:17:10.392 real 0m7.214s 00:17:10.392 user 0m3.081s 00:17:10.392 sys 0m4.768s 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:10.392 10:44:39 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:10.392 ************************************ 00:17:10.392 END TEST nvmf_zcopy 00:17:10.392 ************************************ 00:17:10.392 10:44:39 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:17:10.392 10:44:39 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:10.392 10:44:39 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:10.392 10:44:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:10.651 ************************************ 00:17:10.651 START TEST nvmf_nmic 00:17:10.651 ************************************ 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:17:10.651 * Looking for test storage... 
00:17:10.651 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.651 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.652 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.652 10:44:39 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.652 10:44:39 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.652 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.652 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.652 10:44:39 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.652 10:44:39 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.363 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:17.364 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.364 10:44:45 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:17.364 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@377 -- # modinfo irdma 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:17.364 Found net devices under 0000:af:00.0: cvl_0_0 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:17.364 Found net devices under 0000:af:00.1: cvl_0_1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
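Note: the 'Found net devices under ...' messages come from a sysfs walk that maps each allow-listed PCI function to its kernel net device. A sketch of that lookup for the two E810 functions on this rig (the PCI addresses and cvl_* names are machine-specific):

# Mirror of the pci_net_devs expansion traced at nvmf/common.sh@383-400.
for pci in 0000:af:00.0 0000:af:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${path##*/}"    # cvl_0_0, cvl_0_1 here
    done
done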
00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:17:17.364 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:17.364 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:17.364 altname enp175s0f0np0 00:17:17.364 altname ens801f0np0 00:17:17.364 inet 192.168.100.8/24 scope global cvl_0_0 00:17:17.364 valid_lft forever preferred_lft forever 00:17:17.364 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:17.364 valid_lft forever preferred_lft forever 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:17.364 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:17:17.364 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:17.364 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:17.364 altname enp175s0f1np1 00:17:17.364 altname ens801f1np1 00:17:17.364 inet 192.168.100.9/24 scope global cvl_0_1 00:17:17.364 valid_lft forever preferred_lft forever 00:17:17.364 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:17.365 valid_lft forever preferred_lft forever 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 
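Note: the ip/awk/cut pipeline traced at nvmf/common.sh@113 is how the harness reads an interface's first IPv4 address. A one-function sketch (the addresses shown are specific to this rig):

# First IPv4 address of an interface, as allocate_nic_ips uses it.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address cvl_0_0    # 192.168.100.8 here
get_ip_address cvl_0_1    # 192.168.100.9 here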
00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:17.365 192.168.100.9' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:17.365 192.168.100.9' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:17.365 192.168.100.9' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4071008 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4071008 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 4071008 ']' 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:17.365 10:44:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 [2024-06-10 10:44:45.376389] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:17.365 [2024-06-10 10:44:45.376435] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.365 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.365 [2024-06-10 10:44:45.435917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.365 [2024-06-10 10:44:45.515369] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.365 [2024-06-10 10:44:45.515406] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.365 [2024-06-10 10:44:45.515413] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.365 [2024-06-10 10:44:45.515419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.365 [2024-06-10 10:44:45.515425] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
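Note: a few entries back the two discovered addresses were folded into RDMA_IP_LIST and then split into the first and second target IPs with head/tail. A sketch of that split, matching the trace at nvmf/common.sh@456-458:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9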
00:17:17.365 [2024-06-10 10:44:45.515466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.365 [2024-06-10 10:44:45.515562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.365 [2024-06-10 10:44:45.515649] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.365 [2024-06-10 10:44:45.515651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 [2024-06-10 10:44:46.234235] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x9c48f0/0x9c3f30) succeed. 00:17:17.365 [2024-06-10 10:44:46.243159] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x9c5ca0/0x9c44b0) succeed. 00:17:17.365 [2024-06-10 10:44:46.243180] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 Malloc0 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 [2024-06-10 10:44:46.297984] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:17.365 test case1: single bdev can't be used in multiple subsystems 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.365 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 [2024-06-10 10:44:46.322022] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 
00:17:17.366 [2024-06-10 10:44:46.322039] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:17.366 [2024-06-10 10:44:46.322046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.366 request: 00:17:17.366 { 00:17:17.366 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.366 "namespace": { 00:17:17.366 "bdev_name": "Malloc0", 00:17:17.366 "no_auto_visible": false 00:17:17.366 }, 00:17:17.366 "method": "nvmf_subsystem_add_ns", 00:17:17.366 "req_id": 1 00:17:17.366 } 00:17:17.366 Got JSON-RPC error response 00:17:17.366 response: 00:17:17.366 { 00:17:17.366 "code": -32602, 00:17:17.366 "message": "Invalid parameters" 00:17:17.366 } 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:17.366 Adding namespace failed - expected result. 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:17.366 test case2: host connect to nvmf target in multiple paths 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:17.366 [2024-06-10 10:44:46.334080] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.366 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:17.625 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:17:17.884 10:44:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.884 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:17:17.884 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.884 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:17:17.884 10:44:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:17:19.788 10:44:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:19.788 10:44:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:19.788 10:44:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.788 10:44:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:17:19.788 10:44:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 
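Note: after the two 'nvme connect' calls above, waitforserial polls lsblk until a namespace with the subsystem's serial shows up. A compact sketch of that loop (the retry bound and sleep mirror the traced values; treat it as an approximation of common/autotest_common.sh, not the exact helper):

waitforserial() {
    local serial=$1 want=${2:-1} i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
        sleep 2
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME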
00:17:19.788 10:44:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:17:19.788 10:44:48 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:19.788 [global] 00:17:19.788 thread=1 00:17:19.788 invalidate=1 00:17:19.788 rw=write 00:17:19.788 time_based=1 00:17:19.788 runtime=1 00:17:19.788 ioengine=libaio 00:17:19.788 direct=1 00:17:19.788 bs=4096 00:17:19.788 iodepth=1 00:17:19.788 norandommap=0 00:17:19.788 numjobs=1 00:17:19.788 00:17:19.788 verify_dump=1 00:17:19.788 verify_backlog=512 00:17:19.788 verify_state_save=0 00:17:19.788 do_verify=1 00:17:19.788 verify=crc32c-intel 00:17:19.788 [job0] 00:17:19.788 filename=/dev/nvme0n1 00:17:20.045 Could not set queue depth (nvme0n1) 00:17:20.304 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.304 fio-3.35 00:17:20.304 Starting 1 thread 00:17:21.241 00:17:21.241 job0: (groupid=0, jobs=1): err= 0: pid=4071627: Mon Jun 10 10:44:50 2024 00:17:21.241 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:17:21.241 slat (nsec): min=6492, max=55482, avg=7305.19, stdev=1389.27 00:17:21.241 clat (usec): min=55, max=219, avg=63.56, stdev= 6.23 00:17:21.241 lat (usec): min=61, max=266, avg=70.87, stdev= 7.04 00:17:21.241 clat percentiles (usec): 00:17:21.241 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 59], 20.00th=[ 61], 00:17:21.241 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 63], 60.00th=[ 64], 00:17:21.241 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 71], 00:17:21.241 | 99.00th=[ 76], 99.50th=[ 86], 99.90th=[ 161], 99.95th=[ 182], 00:17:21.241 | 99.99th=[ 221] 00:17:21.241 write: IOPS=7050, BW=27.5MiB/s (28.9MB/s)(27.6MiB/1001msec); 0 zone resets 00:17:21.241 slat (nsec): min=8290, max=38023, avg=9192.19, stdev=988.66 00:17:21.241 clat (usec): min=49, max=129, avg=61.90, stdev= 3.89 00:17:21.241 lat (usec): min=62, max=143, avg=71.09, stdev= 4.09 00:17:21.241 clat percentiles (usec): 00:17:21.241 | 1.00th=[ 56], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 59], 00:17:21.241 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:17:21.241 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 69], 00:17:21.241 | 99.00th=[ 72], 99.50th=[ 75], 99.90th=[ 85], 99.95th=[ 102], 00:17:21.241 | 99.99th=[ 130] 00:17:21.241 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:17:21.241 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:17:21.241 lat (usec) : 50=0.01%, 100=99.80%, 250=0.20% 00:17:21.241 cpu : usr=7.00%, sys=15.40%, ctx=13714, majf=0, minf=2 00:17:21.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.241 issued rwts: total=6656,7058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.241 00:17:21.241 Run status group 0 (all jobs): 00:17:21.241 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:17:21.241 WRITE: bw=27.5MiB/s (28.9MB/s), 27.5MiB/s-27.5MiB/s (28.9MB/s-28.9MB/s), io=27.6MiB (28.9MB), run=1001-1001msec 00:17:21.241 00:17:21.241 Disk stats (read/write): 00:17:21.241 nvme0n1: ios=6194/6171, merge=0/0, ticks=360/359, in_queue=719, util=90.68% 
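Note: the [job0] section and results above come from the scripts/fio-wrapper helper. An equivalent direct fio invocation for the same job, assuming the connected namespace landed at /dev/nvme0n1 as it did in this run:

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --rw=write --bs=4096 --iodepth=1 \
    --numjobs=1 --time_based=1 --runtime=1 --invalidate=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0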
00:17:21.241 10:44:50 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:23.146 rmmod nvme_rdma 00:17:23.146 rmmod nvme_fabrics 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4071008 ']' 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4071008 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 4071008 ']' 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 4071008 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:23.146 10:44:51 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4071008 00:17:23.146 10:44:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:23.146 10:44:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:23.146 10:44:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4071008' 00:17:23.146 killing process with pid 4071008 00:17:23.146 10:44:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 4071008 00:17:23.146 10:44:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 4071008 00:17:23.405 10:44:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.405 10:44:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:23.405 00:17:23.405 real 0m12.836s 00:17:23.405 user 0m34.562s 00:17:23.405 sys 0m5.118s 00:17:23.405 10:44:52 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:17:23.405 10:44:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:23.405 ************************************ 00:17:23.405 END TEST nvmf_nmic 00:17:23.405 ************************************ 00:17:23.405 10:44:52 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:17:23.405 10:44:52 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:23.405 10:44:52 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:23.405 10:44:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:23.405 ************************************ 00:17:23.405 START TEST nvmf_fio_target 00:17:23.405 ************************************ 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:17:23.405 * Looking for test storage... 00:17:23.405 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.405 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.664 10:44:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.230 10:44:58 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:30.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:30.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.230 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@375 -- # (( 1 != 1 )) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@377 -- # modinfo irdma 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:30.231 Found net devices under 0000:af:00.0: cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:30.231 Found net devices under 0000:af:00.1: cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:30.231 10:44:58 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:17:30.231 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:30.231 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:30.231 altname enp175s0f0np0 00:17:30.231 altname ens801f0np0 00:17:30.231 inet 192.168.100.8/24 scope global cvl_0_0 00:17:30.231 valid_lft forever preferred_lft forever 00:17:30.231 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:30.231 valid_lft forever preferred_lft forever 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 
00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:17:30.231 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:30.231 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:30.231 altname enp175s0f1np1 00:17:30.231 altname ens801f1np1 00:17:30.231 inet 192.168.100.9/24 scope global cvl_0_1 00:17:30.231 valid_lft forever preferred_lft forever 00:17:30.231 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:30.231 valid_lft forever preferred_lft forever 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 
00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:30.231 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:30.232 192.168.100.9' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:30.232 192.168.100.9' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:30.232 192.168.100.9' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4075630 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4075630 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 4075630 ']' 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:30.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:30.232 10:44:58 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.232 [2024-06-10 10:44:58.565367] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:30.232 [2024-06-10 10:44:58.565407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.232 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.232 [2024-06-10 10:44:58.627344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.232 [2024-06-10 10:44:58.703780] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.232 [2024-06-10 10:44:58.703818] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.232 [2024-06-10 10:44:58.703824] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.232 [2024-06-10 10:44:58.703830] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.232 [2024-06-10 10:44:58.703835] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.232 [2024-06-10 10:44:58.703878] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.232 [2024-06-10 10:44:58.703903] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.232 [2024-06-10 10:44:58.703993] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.232 [2024-06-10 10:44:58.703994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.490 10:44:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:30.490 10:44:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:17:30.490 10:44:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.490 10:44:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:30.490 10:44:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.490 10:44:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.490 10:44:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:30.748 [2024-06-10 10:44:59.569184] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x10658f0/0x1064f30) succeed. 00:17:30.748 [2024-06-10 10:44:59.577997] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1066ca0/0x10654b0) succeed. 00:17:30.748 [2024-06-10 10:44:59.578020] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:30.748 10:44:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.006 10:44:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:31.006 10:44:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.007 10:44:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:31.007 10:44:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.265 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:31.265 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.523 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:31.523 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:31.781 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.781 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:31.781 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:32.039 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:32.039 10:45:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:32.297 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:32.297 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:32.555 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:32.555 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:32.555 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:32.812 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:32.812 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.362 10:45:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:33.362 [2024-06-10 10:45:02.030168] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:33.362 10:45:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:33.362 10:45:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:33.620 10:45:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:33.620 10:45:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:33.620 10:45:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:17:33.620 10:45:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.620 10:45:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:17:33.620 10:45:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:17:33.620 10:45:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:17:36.146 10:45:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:36.146 10:45:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:36.146 10:45:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.146 10:45:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:17:36.146 10:45:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.146 10:45:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:17:36.146 10:45:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:36.146 [global] 00:17:36.146 thread=1 00:17:36.146 invalidate=1 00:17:36.146 rw=write 00:17:36.146 time_based=1 00:17:36.146 runtime=1 00:17:36.146 ioengine=libaio 00:17:36.146 direct=1 00:17:36.146 bs=4096 00:17:36.146 iodepth=1 00:17:36.146 norandommap=0 00:17:36.146 numjobs=1 00:17:36.146 00:17:36.146 verify_dump=1 00:17:36.146 verify_backlog=512 00:17:36.146 verify_state_save=0 00:17:36.146 do_verify=1 00:17:36.146 verify=crc32c-intel 00:17:36.146 [job0] 00:17:36.146 filename=/dev/nvme0n1 00:17:36.146 [job1] 00:17:36.146 filename=/dev/nvme0n2 00:17:36.146 [job2] 00:17:36.146 filename=/dev/nvme0n3 00:17:36.146 [job3] 00:17:36.146 filename=/dev/nvme0n4 00:17:36.146 Could not set queue depth (nvme0n1) 00:17:36.146 Could not set queue depth (nvme0n2) 00:17:36.146 Could not set queue depth (nvme0n3) 00:17:36.146 Could not set queue depth (nvme0n4) 00:17:36.146 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.146 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.146 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.146 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.146 fio-3.35 00:17:36.146 Starting 4 threads 00:17:37.519 00:17:37.519 job0: (groupid=0, jobs=1): err= 0: pid=4077079: Mon Jun 10 10:45:06 
2024 00:17:37.519 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:17:37.519 slat (nsec): min=6250, max=23818, avg=7164.57, stdev=952.24 00:17:37.519 clat (usec): min=69, max=215, avg=86.87, stdev=13.65 00:17:37.519 lat (usec): min=76, max=223, avg=94.04, stdev=13.73 00:17:37.519 clat percentiles (usec): 00:17:37.519 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:17:37.519 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:17:37.519 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 124], 00:17:37.519 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 153], 99.95th=[ 167], 00:17:37.519 | 99.99th=[ 217] 00:17:37.519 write: IOPS=5265, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1001msec); 0 zone resets 00:17:37.519 slat (nsec): min=8052, max=39855, avg=9246.88, stdev=1113.81 00:17:37.519 clat (usec): min=68, max=166, avg=85.23, stdev=12.81 00:17:37.519 lat (usec): min=77, max=176, avg=94.48, stdev=12.92 00:17:37.519 clat percentiles (usec): 00:17:37.519 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:17:37.519 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 84], 00:17:37.519 | 70.00th=[ 85], 80.00th=[ 89], 90.00th=[ 110], 95.00th=[ 119], 00:17:37.519 | 99.00th=[ 127], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 161], 00:17:37.519 | 99.99th=[ 167] 00:17:37.519 bw ( KiB/s): min=22850, max=22850, per=32.07%, avg=22850.00, stdev= 0.00, samples=1 00:17:37.519 iops : min= 5712, max= 5712, avg=5712.00, stdev= 0.00, samples=1 00:17:37.519 lat (usec) : 100=89.37%, 250=10.63% 00:17:37.519 cpu : usr=5.10%, sys=12.10%, ctx=10391, majf=0, minf=1 00:17:37.519 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.519 issued rwts: total=5120,5271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.519 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.519 job1: (groupid=0, jobs=1): err= 0: pid=4077080: Mon Jun 10 10:45:06 2024 00:17:37.519 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:17:37.519 slat (nsec): min=6151, max=17613, avg=7148.22, stdev=788.96 00:17:37.519 clat (usec): min=79, max=187, avg=126.25, stdev=10.72 00:17:37.519 lat (usec): min=86, max=194, avg=133.40, stdev=10.68 00:17:37.519 clat percentiles (usec): 00:17:37.519 | 1.00th=[ 99], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 119], 00:17:37.519 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 129], 00:17:37.519 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:17:37.519 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 188], 00:17:37.519 | 99.99th=[ 188] 00:17:37.519 write: IOPS=3941, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1001msec); 0 zone resets 00:17:37.519 slat (nsec): min=8029, max=35336, avg=9104.84, stdev=1063.94 00:17:37.519 clat (usec): min=74, max=304, avg=119.26, stdev=10.43 00:17:37.519 lat (usec): min=83, max=313, avg=128.37, stdev=10.41 00:17:37.519 clat percentiles (usec): 00:17:37.519 | 1.00th=[ 95], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 113], 00:17:37.519 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:17:37.519 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 135], 00:17:37.519 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 184], 00:17:37.519 | 99.99th=[ 306] 00:17:37.519 bw ( KiB/s): min=16384, max=16384, per=23.00%, avg=16384.00, stdev= 0.00, samples=1 00:17:37.519 iops : 
min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:17:37.519 lat (usec) : 100=1.69%, 250=98.30%, 500=0.01% 00:17:37.519 cpu : usr=4.40%, sys=8.10%, ctx=7529, majf=0, minf=1 00:17:37.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.520 issued rwts: total=3584,3945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.520 job2: (groupid=0, jobs=1): err= 0: pid=4077082: Mon Jun 10 10:45:06 2024 00:17:37.520 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:17:37.520 slat (nsec): min=3862, max=21455, avg=6801.31, stdev=1022.80 00:17:37.520 clat (usec): min=76, max=271, avg=99.22, stdev=12.78 00:17:37.520 lat (usec): min=83, max=278, avg=106.03, stdev=12.25 00:17:37.520 clat percentiles (usec): 00:17:37.520 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 91], 00:17:37.520 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:17:37.520 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 129], 00:17:37.520 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 157], 99.95th=[ 174], 00:17:37.520 | 99.99th=[ 273] 00:17:37.520 write: IOPS=4664, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1001msec); 0 zone resets 00:17:37.520 slat (nsec): min=4592, max=35369, avg=8761.72, stdev=1513.84 00:17:37.520 clat (usec): min=74, max=169, avg=97.03, stdev=11.76 00:17:37.520 lat (usec): min=80, max=175, avg=105.79, stdev=10.93 00:17:37.520 clat percentiles (usec): 00:17:37.520 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:17:37.520 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 96], 00:17:37.520 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 117], 95.00th=[ 124], 00:17:37.520 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 159], 99.95th=[ 165], 00:17:37.520 | 99.99th=[ 169] 00:17:37.520 bw ( KiB/s): min=20480, max=20480, per=28.74%, avg=20480.00, stdev= 0.00, samples=1 00:17:37.520 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:17:37.520 lat (usec) : 100=70.86%, 250=29.12%, 500=0.02% 00:17:37.520 cpu : usr=5.20%, sys=9.30%, ctx=9277, majf=0, minf=2 00:17:37.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.520 issued rwts: total=4608,4669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.520 job3: (groupid=0, jobs=1): err= 0: pid=4077084: Mon Jun 10 10:45:06 2024 00:17:37.520 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:17:37.520 slat (nsec): min=6421, max=24167, avg=7340.11, stdev=723.69 00:17:37.520 clat (usec): min=87, max=180, avg=126.05, stdev= 9.26 00:17:37.520 lat (usec): min=94, max=187, avg=133.39, stdev= 9.26 00:17:37.520 clat percentiles (usec): 00:17:37.520 | 1.00th=[ 104], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:17:37.520 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 128], 00:17:37.520 | 70.00th=[ 131], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 141], 00:17:37.520 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 180], 00:17:37.520 | 99.99th=[ 182] 00:17:37.520 write: IOPS=3941, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1001msec); 0 zone resets 00:17:37.520 slat 
(nsec): min=8216, max=39208, avg=9338.70, stdev=1093.94 00:17:37.520 clat (usec): min=82, max=224, avg=119.00, stdev= 8.87 00:17:37.520 lat (usec): min=91, max=234, avg=128.34, stdev= 8.92 00:17:37.520 clat percentiles (usec): 00:17:37.520 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 113], 00:17:37.520 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 121], 00:17:37.520 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 129], 95.00th=[ 133], 00:17:37.520 | 99.00th=[ 145], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 176], 00:17:37.520 | 99.99th=[ 225] 00:17:37.520 bw ( KiB/s): min=16351, max=16351, per=22.95%, avg=16351.00, stdev= 0.00, samples=1 00:17:37.520 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:17:37.520 lat (usec) : 100=0.89%, 250=99.11% 00:17:37.520 cpu : usr=5.10%, sys=7.70%, ctx=7529, majf=0, minf=1 00:17:37.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.520 issued rwts: total=3584,3945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.520 00:17:37.520 Run status group 0 (all jobs): 00:17:37.520 READ: bw=65.9MiB/s (69.1MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1001-1001msec 00:17:37.520 WRITE: bw=69.6MiB/s (73.0MB/s), 15.4MiB/s-20.6MiB/s (16.1MB/s-21.6MB/s), io=69.6MiB (73.0MB), run=1001-1001msec 00:17:37.520 00:17:37.520 Disk stats (read/write): 00:17:37.520 nvme0n1: ios=4658/4638, merge=0/0, ticks=369/362, in_queue=731, util=86.07% 00:17:37.520 nvme0n2: ios=3072/3312, merge=0/0, ticks=368/350, in_queue=718, util=86.69% 00:17:37.520 nvme0n3: ios=4042/4096, merge=0/0, ticks=363/335, in_queue=698, util=88.85% 00:17:37.520 nvme0n4: ios=3072/3313, merge=0/0, ticks=362/368, in_queue=730, util=89.60% 00:17:37.520 10:45:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:37.520 [global] 00:17:37.520 thread=1 00:17:37.520 invalidate=1 00:17:37.520 rw=randwrite 00:17:37.520 time_based=1 00:17:37.520 runtime=1 00:17:37.520 ioengine=libaio 00:17:37.520 direct=1 00:17:37.520 bs=4096 00:17:37.520 iodepth=1 00:17:37.520 norandommap=0 00:17:37.520 numjobs=1 00:17:37.520 00:17:37.520 verify_dump=1 00:17:37.520 verify_backlog=512 00:17:37.520 verify_state_save=0 00:17:37.520 do_verify=1 00:17:37.520 verify=crc32c-intel 00:17:37.520 [job0] 00:17:37.520 filename=/dev/nvme0n1 00:17:37.520 [job1] 00:17:37.520 filename=/dev/nvme0n2 00:17:37.520 [job2] 00:17:37.520 filename=/dev/nvme0n3 00:17:37.520 [job3] 00:17:37.520 filename=/dev/nvme0n4 00:17:37.520 Could not set queue depth (nvme0n1) 00:17:37.520 Could not set queue depth (nvme0n2) 00:17:37.520 Could not set queue depth (nvme0n3) 00:17:37.520 Could not set queue depth (nvme0n4) 00:17:37.520 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.520 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.520 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.520 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:37.520 fio-3.35 00:17:37.520 
Starting 4 threads 00:17:38.896 00:17:38.896 job0: (groupid=0, jobs=1): err= 0: pid=4077454: Mon Jun 10 10:45:07 2024 00:17:38.896 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:17:38.896 slat (nsec): min=8266, max=36871, avg=9912.84, stdev=1373.52 00:17:38.896 clat (usec): min=80, max=189, avg=145.57, stdev= 7.63 00:17:38.896 lat (usec): min=90, max=200, avg=155.49, stdev= 7.58 00:17:38.896 clat percentiles (usec): 00:17:38.896 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:17:38.896 | 30.00th=[ 143], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:17:38.896 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 155], 95.00th=[ 159], 00:17:38.896 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 188], 00:17:38.896 | 99.99th=[ 190] 00:17:38.896 write: IOPS=3380, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec); 0 zone resets 00:17:38.896 slat (nsec): min=8459, max=61667, avg=12263.88, stdev=1847.38 00:17:38.896 clat (usec): min=79, max=247, avg=136.24, stdev= 7.15 00:17:38.896 lat (usec): min=90, max=261, avg=148.51, stdev= 7.08 00:17:38.896 clat percentiles (usec): 00:17:38.896 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:17:38.896 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:17:38.896 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 149], 00:17:38.896 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 188], 00:17:38.896 | 99.99th=[ 247] 00:17:38.896 bw ( KiB/s): min=13784, max=13784, per=24.19%, avg=13784.00, stdev= 0.00, samples=1 00:17:38.896 iops : min= 3446, max= 3446, avg=3446.00, stdev= 0.00, samples=1 00:17:38.896 lat (usec) : 100=0.06%, 250=99.94% 00:17:38.896 cpu : usr=5.20%, sys=9.10%, ctx=6458, majf=0, minf=2 00:17:38.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.896 issued rwts: total=3072,3384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.896 job1: (groupid=0, jobs=1): err= 0: pid=4077455: Mon Jun 10 10:45:07 2024 00:17:38.896 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:17:38.896 slat (nsec): min=6216, max=33652, avg=7843.92, stdev=1530.24 00:17:38.896 clat (usec): min=73, max=193, avg=147.61, stdev= 7.55 00:17:38.896 lat (usec): min=80, max=200, avg=155.46, stdev= 7.69 00:17:38.896 clat percentiles (usec): 00:17:38.896 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:17:38.896 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 147], 60.00th=[ 149], 00:17:38.896 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 161], 00:17:38.896 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 182], 99.95th=[ 186], 00:17:38.896 | 99.99th=[ 194] 00:17:38.896 write: IOPS=3392, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec); 0 zone resets 00:17:38.896 slat (nsec): min=8182, max=38083, avg=10220.64, stdev=1660.82 00:17:38.896 clat (usec): min=70, max=249, avg=138.19, stdev= 7.30 00:17:38.896 lat (usec): min=80, max=261, avg=148.41, stdev= 7.30 00:17:38.896 clat percentiles (usec): 00:17:38.896 | 1.00th=[ 122], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:17:38.896 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:17:38.896 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 149], 00:17:38.897 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 186], 
00:17:38.897 | 99.99th=[ 251] 00:17:38.897 bw ( KiB/s): min=13880, max=13880, per=24.36%, avg=13880.00, stdev= 0.00, samples=1 00:17:38.897 iops : min= 3470, max= 3470, avg=3470.00, stdev= 0.00, samples=1 00:17:38.897 lat (usec) : 100=0.26%, 250=99.74% 00:17:38.897 cpu : usr=4.80%, sys=9.00%, ctx=6468, majf=0, minf=1 00:17:38.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.897 issued rwts: total=3072,3396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.897 job2: (groupid=0, jobs=1): err= 0: pid=4077456: Mon Jun 10 10:45:07 2024 00:17:38.897 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:17:38.897 slat (nsec): min=7113, max=31085, avg=10082.68, stdev=1833.92 00:17:38.897 clat (usec): min=114, max=194, avg=145.65, stdev= 8.64 00:17:38.897 lat (usec): min=126, max=204, avg=155.73, stdev= 8.45 00:17:38.897 clat percentiles (usec): 00:17:38.897 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:17:38.897 | 30.00th=[ 143], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:17:38.897 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 161], 00:17:38.897 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 194], 00:17:38.897 | 99.99th=[ 196] 00:17:38.897 write: IOPS=3377, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec); 0 zone resets 00:17:38.897 slat (nsec): min=8558, max=39117, avg=12179.20, stdev=2084.63 00:17:38.897 clat (usec): min=81, max=214, avg=136.41, stdev= 7.95 00:17:38.897 lat (usec): min=92, max=226, avg=148.59, stdev= 7.71 00:17:38.897 clat percentiles (usec): 00:17:38.897 | 1.00th=[ 119], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:17:38.897 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:17:38.897 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:17:38.897 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 186], 00:17:38.897 | 99.99th=[ 215] 00:17:38.897 bw ( KiB/s): min=13760, max=13760, per=24.15%, avg=13760.00, stdev= 0.00, samples=1 00:17:38.897 iops : min= 3440, max= 3440, avg=3440.00, stdev= 0.00, samples=1 00:17:38.897 lat (usec) : 100=0.11%, 250=99.89% 00:17:38.897 cpu : usr=5.80%, sys=8.70%, ctx=6453, majf=0, minf=1 00:17:38.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.897 issued rwts: total=3072,3381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.897 job3: (groupid=0, jobs=1): err= 0: pid=4077457: Mon Jun 10 10:45:07 2024 00:17:38.897 read: IOPS=3643, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1001msec) 00:17:38.897 slat (nsec): min=3776, max=21760, avg=7113.68, stdev=1273.55 00:17:38.897 clat (usec): min=76, max=186, avg=121.90, stdev=29.67 00:17:38.897 lat (usec): min=83, max=190, avg=129.02, stdev=29.44 00:17:38.897 clat percentiles (usec): 00:17:38.897 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 88], 00:17:38.897 | 30.00th=[ 91], 40.00th=[ 97], 50.00th=[ 143], 60.00th=[ 145], 00:17:38.897 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 155], 00:17:38.897 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 176], 99.95th=[ 
176], 00:17:38.897 | 99.99th=[ 188] 00:17:38.897 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:17:38.897 slat (nsec): min=4444, max=33969, avg=9041.05, stdev=1813.23 00:17:38.897 clat (usec): min=73, max=169, avg=116.57, stdev=26.47 00:17:38.897 lat (usec): min=83, max=176, avg=125.61, stdev=26.29 00:17:38.897 clat percentiles (usec): 00:17:38.897 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:17:38.897 | 30.00th=[ 89], 40.00th=[ 94], 50.00th=[ 135], 60.00th=[ 137], 00:17:38.897 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 147], 00:17:38.897 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 165], 00:17:38.897 | 99.99th=[ 169] 00:17:38.897 bw ( KiB/s): min=13912, max=13912, per=24.42%, avg=13912.00, stdev= 0.00, samples=1 00:17:38.897 iops : min= 3478, max= 3478, avg=3478.00, stdev= 0.00, samples=1 00:17:38.897 lat (usec) : 100=42.58%, 250=57.42% 00:17:38.897 cpu : usr=5.40%, sys=7.30%, ctx=7743, majf=0, minf=1 00:17:38.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.897 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.897 00:17:38.897 Run status group 0 (all jobs): 00:17:38.897 READ: bw=50.2MiB/s (52.6MB/s), 12.0MiB/s-14.2MiB/s (12.6MB/s-14.9MB/s), io=50.2MiB (52.7MB), run=1001-1001msec 00:17:38.897 WRITE: bw=55.6MiB/s (58.3MB/s), 13.2MiB/s-16.0MiB/s (13.8MB/s-16.8MB/s), io=55.7MiB (58.4MB), run=1001-1001msec 00:17:38.897 00:17:38.897 Disk stats (read/write): 00:17:38.897 nvme0n1: ios=2610/2982, merge=0/0, ticks=355/381, in_queue=736, util=86.37% 00:17:38.897 nvme0n2: ios=2560/2994, merge=0/0, ticks=351/392, in_queue=743, util=86.80% 00:17:38.897 nvme0n3: ios=2560/2979, merge=0/0, ticks=353/381, in_queue=734, util=88.97% 00:17:38.897 nvme0n4: ios=3072/3179, merge=0/0, ticks=375/362, in_queue=737, util=89.62% 00:17:38.897 10:45:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:38.897 [global] 00:17:38.897 thread=1 00:17:38.897 invalidate=1 00:17:38.897 rw=write 00:17:38.897 time_based=1 00:17:38.897 runtime=1 00:17:38.897 ioengine=libaio 00:17:38.897 direct=1 00:17:38.897 bs=4096 00:17:38.897 iodepth=128 00:17:38.897 norandommap=0 00:17:38.897 numjobs=1 00:17:38.897 00:17:38.897 verify_dump=1 00:17:38.897 verify_backlog=512 00:17:38.897 verify_state_save=0 00:17:38.897 do_verify=1 00:17:38.897 verify=crc32c-intel 00:17:38.897 [job0] 00:17:38.897 filename=/dev/nvme0n1 00:17:38.897 [job1] 00:17:38.897 filename=/dev/nvme0n2 00:17:38.897 [job2] 00:17:38.897 filename=/dev/nvme0n3 00:17:38.897 [job3] 00:17:38.897 filename=/dev/nvme0n4 00:17:38.897 Could not set queue depth (nvme0n1) 00:17:38.897 Could not set queue depth (nvme0n2) 00:17:38.897 Could not set queue depth (nvme0n3) 00:17:38.897 Could not set queue depth (nvme0n4) 00:17:39.155 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:39.156 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:39.156 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:39.156 job3: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:39.156 fio-3.35 00:17:39.156 Starting 4 threads 00:17:40.550 00:17:40.550 job0: (groupid=0, jobs=1): err= 0: pid=4077898: Mon Jun 10 10:45:09 2024 00:17:40.550 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:17:40.550 slat (nsec): min=1441, max=3633.9k, avg=76314.42, stdev=298494.44 00:17:40.550 clat (usec): min=5594, max=18039, avg=9930.39, stdev=3566.64 00:17:40.550 lat (usec): min=5730, max=18041, avg=10006.71, stdev=3582.49 00:17:40.550 clat percentiles (usec): 00:17:40.550 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 7046], 00:17:40.550 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8291], 00:17:40.550 | 70.00th=[12518], 80.00th=[14615], 90.00th=[15533], 95.00th=[16188], 00:17:40.550 | 99.00th=[17957], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:17:40.550 | 99.99th=[17957] 00:17:40.550 write: IOPS=6801, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1004msec); 0 zone resets 00:17:40.550 slat (nsec): min=1965, max=2973.9k, avg=69526.75, stdev=259013.37 00:17:40.550 clat (usec): min=2504, max=19935, avg=8938.36, stdev=3160.28 00:17:40.550 lat (usec): min=4350, max=19938, avg=9007.89, stdev=3174.74 00:17:40.550 clat percentiles (usec): 00:17:40.550 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6587], 00:17:40.550 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7767], 00:17:40.550 | 70.00th=[ 7963], 80.00th=[13304], 90.00th=[14484], 95.00th=[14746], 00:17:40.550 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19792], 99.95th=[20055], 00:17:40.550 | 99.99th=[20055] 00:17:40.550 bw ( KiB/s): min=25472, max=28136, per=24.45%, avg=26804.00, stdev=1883.73, samples=2 00:17:40.550 iops : min= 6368, max= 7034, avg=6701.00, stdev=470.93, samples=2 00:17:40.550 lat (msec) : 4=0.01%, 10=69.74%, 20=30.26% 00:17:40.550 cpu : usr=2.79%, sys=4.29%, ctx=1292, majf=0, minf=1 00:17:40.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.550 issued rwts: total=6656,6829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.550 job1: (groupid=0, jobs=1): err= 0: pid=4077909: Mon Jun 10 10:45:09 2024 00:17:40.550 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:17:40.550 slat (nsec): min=1425, max=3651.2k, avg=61733.61, stdev=264656.88 00:17:40.550 clat (usec): min=5762, max=21079, avg=8102.89, stdev=2592.15 00:17:40.550 lat (usec): min=5765, max=21081, avg=8164.62, stdev=2607.65 00:17:40.550 clat percentiles (usec): 00:17:40.550 | 1.00th=[ 6194], 5.00th=[ 6390], 10.00th=[ 6456], 20.00th=[ 6718], 00:17:40.550 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7635], 00:17:40.550 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[11600], 95.00th=[14091], 00:17:40.550 | 99.00th=[17957], 99.50th=[17957], 99.90th=[21103], 99.95th=[21103], 00:17:40.550 | 99.99th=[21103] 00:17:40.550 write: IOPS=8197, BW=32.0MiB/s (33.6MB/s)(32.1MiB/1004msec); 0 zone resets 00:17:40.550 slat (nsec): min=1998, max=3635.8k, avg=57437.63, stdev=238292.85 00:17:40.550 clat (usec): min=3660, max=13523, avg=7380.26, stdev=1600.85 00:17:40.550 lat (usec): min=5423, max=13532, avg=7437.70, stdev=1611.56 00:17:40.550 clat percentiles (usec): 00:17:40.550 | 1.00th=[ 5866], 5.00th=[ 6128], 
10.00th=[ 6259], 20.00th=[ 6390], 00:17:40.550 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:17:40.550 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8225], 95.00th=[12780], 00:17:40.550 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13566], 99.95th=[13566], 00:17:40.550 | 99.99th=[13566] 00:17:40.550 bw ( KiB/s): min=28672, max=36864, per=29.89%, avg=32768.00, stdev=5792.62, samples=2 00:17:40.550 iops : min= 7168, max= 9216, avg=8192.00, stdev=1448.15, samples=2 00:17:40.550 lat (msec) : 4=0.01%, 10=91.58%, 20=8.23%, 50=0.18% 00:17:40.550 cpu : usr=3.49%, sys=4.69%, ctx=1410, majf=0, minf=1 00:17:40.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.550 issued rwts: total=8192,8230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.550 job2: (groupid=0, jobs=1): err= 0: pid=4077923: Mon Jun 10 10:45:09 2024 00:17:40.550 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:17:40.550 slat (nsec): min=1432, max=2826.8k, avg=78972.74, stdev=303236.50 00:17:40.550 clat (usec): min=349, max=17055, avg=10427.50, stdev=3339.16 00:17:40.550 lat (usec): min=1268, max=17060, avg=10506.47, stdev=3355.27 00:17:40.550 clat percentiles (usec): 00:17:40.550 | 1.00th=[ 4293], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 8160], 00:17:40.550 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[ 9765], 00:17:40.550 | 70.00th=[10028], 80.00th=[15270], 90.00th=[16188], 95.00th=[16581], 00:17:40.550 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[17171], 00:17:40.550 | 99.99th=[17171] 00:17:40.550 write: IOPS=6333, BW=24.7MiB/s (25.9MB/s)(24.9MiB/1006msec); 0 zone resets 00:17:40.550 slat (usec): min=2, max=3878, avg=77.73, stdev=304.00 00:17:40.550 clat (usec): min=2128, max=16648, avg=9967.92, stdev=3007.77 00:17:40.550 lat (usec): min=2141, max=16651, avg=10045.65, stdev=3021.37 00:17:40.550 clat percentiles (usec): 00:17:40.550 | 1.00th=[ 4178], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 7898], 00:17:40.550 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:17:40.550 | 70.00th=[ 9634], 80.00th=[14222], 90.00th=[15270], 95.00th=[15664], 00:17:40.550 | 99.00th=[16450], 99.50th=[16581], 99.90th=[16581], 99.95th=[16712], 00:17:40.550 | 99.99th=[16712] 00:17:40.550 bw ( KiB/s): min=21064, max=28896, per=22.78%, avg=24980.00, stdev=5538.06, samples=2 00:17:40.550 iops : min= 5266, max= 7224, avg=6245.00, stdev=1384.52, samples=2 00:17:40.550 lat (usec) : 500=0.01%, 750=0.01% 00:17:40.550 lat (msec) : 2=0.04%, 4=0.33%, 10=71.09%, 20=28.52% 00:17:40.550 cpu : usr=2.39%, sys=4.18%, ctx=1138, majf=0, minf=1 00:17:40.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:40.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.550 issued rwts: total=6144,6372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.550 job3: (groupid=0, jobs=1): err= 0: pid=4077928: Mon Jun 10 10:45:09 2024 00:17:40.550 read: IOPS=5931, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1004msec) 00:17:40.550 slat (nsec): min=1356, max=2786.9k, avg=86226.06, stdev=323141.52 00:17:40.550 clat (usec): min=2490, 
max=18989, avg=10955.59, stdev=3453.43 00:17:40.550 lat (usec): min=4879, max=18991, avg=11041.82, stdev=3465.89 00:17:40.550 clat percentiles (usec): 00:17:40.550 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8291], 00:17:40.550 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:17:40.551 | 70.00th=[13173], 80.00th=[15795], 90.00th=[16581], 95.00th=[16909], 00:17:40.551 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[19006], 00:17:40.551 | 99.99th=[19006] 00:17:40.551 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:17:40.551 slat (usec): min=2, max=2955, avg=76.78, stdev=290.52 00:17:40.551 clat (usec): min=6153, max=17879, avg=10065.32, stdev=2973.88 00:17:40.551 lat (usec): min=6832, max=17882, avg=10142.11, stdev=2983.07 00:17:40.551 clat percentiles (usec): 00:17:40.551 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 7898], 00:17:40.551 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:17:40.551 | 70.00th=[ 9503], 80.00th=[14091], 90.00th=[15401], 95.00th=[15926], 00:17:40.551 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:17:40.551 | 99.99th=[17957] 00:17:40.551 bw ( KiB/s): min=20760, max=28392, per=22.41%, avg=24576.00, stdev=5396.64, samples=2 00:17:40.551 iops : min= 5190, max= 7098, avg=6144.00, stdev=1349.16, samples=2 00:17:40.551 lat (msec) : 4=0.01%, 10=70.63%, 20=29.36% 00:17:40.551 cpu : usr=2.99%, sys=3.29%, ctx=1188, majf=0, minf=1 00:17:40.551 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:40.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:40.551 issued rwts: total=5955,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.551 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:40.551 00:17:40.551 Run status group 0 (all jobs): 00:17:40.551 READ: bw=105MiB/s (110MB/s), 23.2MiB/s-31.9MiB/s (24.3MB/s-33.4MB/s), io=105MiB (110MB), run=1004-1006msec 00:17:40.551 WRITE: bw=107MiB/s (112MB/s), 23.9MiB/s-32.0MiB/s (25.1MB/s-33.6MB/s), io=108MiB (113MB), run=1004-1006msec 00:17:40.551 00:17:40.551 Disk stats (read/write): 00:17:40.551 nvme0n1: ios=6062/6144, merge=0/0, ticks=14068/12631, in_queue=26699, util=86.07% 00:17:40.551 nvme0n2: ios=7429/7680, merge=0/0, ticks=13511/13104, in_queue=26615, util=86.69% 00:17:40.551 nvme0n3: ios=4788/5120, merge=0/0, ticks=30866/30523, in_queue=61389, util=88.85% 00:17:40.551 nvme0n4: ios=5074/5120, merge=0/0, ticks=13898/13018, in_queue=26916, util=89.60% 00:17:40.551 10:45:09 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:40.551 [global] 00:17:40.551 thread=1 00:17:40.551 invalidate=1 00:17:40.551 rw=randwrite 00:17:40.551 time_based=1 00:17:40.551 runtime=1 00:17:40.551 ioengine=libaio 00:17:40.551 direct=1 00:17:40.551 bs=4096 00:17:40.551 iodepth=128 00:17:40.551 norandommap=0 00:17:40.551 numjobs=1 00:17:40.551 00:17:40.551 verify_dump=1 00:17:40.551 verify_backlog=512 00:17:40.551 verify_state_save=0 00:17:40.551 do_verify=1 00:17:40.551 verify=crc32c-intel 00:17:40.551 [job0] 00:17:40.551 filename=/dev/nvme0n1 00:17:40.551 [job1] 00:17:40.551 filename=/dev/nvme0n2 00:17:40.551 [job2] 00:17:40.551 filename=/dev/nvme0n3 00:17:40.551 [job3] 00:17:40.551 filename=/dev/nvme0n4 00:17:40.551 Could not set queue depth 
(nvme0n1) 00:17:40.551 Could not set queue depth (nvme0n2) 00:17:40.551 Could not set queue depth (nvme0n3) 00:17:40.551 Could not set queue depth (nvme0n4) 00:17:40.816 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.816 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.816 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.816 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.816 fio-3.35 00:17:40.816 Starting 4 threads 00:17:42.194 00:17:42.194 job0: (groupid=0, jobs=1): err= 0: pid=4078557: Mon Jun 10 10:45:10 2024 00:17:42.194 read: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec) 00:17:42.194 slat (nsec): min=1394, max=1468.0k, avg=59457.34, stdev=203741.56 00:17:42.194 clat (usec): min=5696, max=13835, avg=7720.09, stdev=2126.91 00:17:42.194 lat (usec): min=5703, max=14330, avg=7779.55, stdev=2143.33 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 6128], 5.00th=[ 6390], 10.00th=[ 6456], 20.00th=[ 6587], 00:17:42.194 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7111], 00:17:42.194 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[12911], 95.00th=[13435], 00:17:42.194 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13829], 99.95th=[13829], 00:17:42.194 | 99.99th=[13829] 00:17:42.194 write: IOPS=8556, BW=33.4MiB/s (35.0MB/s)(33.5MiB/1001msec); 0 zone resets 00:17:42.194 slat (nsec): min=1910, max=1542.9k, avg=57490.45, stdev=194308.52 00:17:42.194 clat (usec): min=449, max=15106, avg=7395.18, stdev=2103.20 00:17:42.194 lat (usec): min=1265, max=15109, avg=7452.67, stdev=2118.38 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 6194], 20.00th=[ 6325], 00:17:42.194 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:17:42.194 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[12256], 95.00th=[12518], 00:17:42.194 | 99.00th=[13173], 99.50th=[14353], 99.90th=[15139], 99.95th=[15139], 00:17:42.194 | 99.99th=[15139] 00:17:42.194 bw ( KiB/s): min=29272, max=38232, per=31.15%, avg=33752.00, stdev=6335.68, samples=2 00:17:42.194 iops : min= 7318, max= 9558, avg=8438.00, stdev=1583.92, samples=2 00:17:42.194 lat (usec) : 500=0.01% 00:17:42.194 lat (msec) : 2=0.10%, 4=0.29%, 10=86.61%, 20=13.00% 00:17:42.194 cpu : usr=3.20%, sys=5.10%, ctx=1259, majf=0, minf=1 00:17:42.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:42.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.194 issued rwts: total=8192,8565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.194 job1: (groupid=0, jobs=1): err= 0: pid=4078560: Mon Jun 10 10:45:10 2024 00:17:42.194 read: IOPS=8258, BW=32.3MiB/s (33.8MB/s)(32.4MiB/1003msec) 00:17:42.194 slat (nsec): min=1334, max=990035, avg=58660.55, stdev=203778.09 00:17:42.194 clat (usec): min=1879, max=13948, avg=7609.82, stdev=2157.04 00:17:42.194 lat (usec): min=2732, max=14021, avg=7668.48, stdev=2165.60 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6587], 00:17:42.194 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 
00:17:42.194 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[12911], 95.00th=[13435], 00:17:42.194 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:17:42.194 | 99.99th=[13960] 00:17:42.194 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:17:42.194 slat (nsec): min=1814, max=2221.0k, avg=56686.12, stdev=196431.86 00:17:42.194 clat (usec): min=4606, max=13193, avg=7325.76, stdev=1936.57 00:17:42.194 lat (usec): min=5184, max=13957, avg=7382.45, stdev=1943.32 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6390], 00:17:42.194 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6718], 00:17:42.194 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[12125], 95.00th=[12387], 00:17:42.194 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13173], 99.95th=[13173], 00:17:42.194 | 99.99th=[13173] 00:17:42.194 bw ( KiB/s): min=30168, max=39176, per=32.00%, avg=34672.00, stdev=6369.62, samples=2 00:17:42.194 iops : min= 7542, max= 9794, avg=8668.00, stdev=1592.40, samples=2 00:17:42.194 lat (msec) : 2=0.01%, 4=0.19%, 10=87.41%, 20=12.39% 00:17:42.194 cpu : usr=2.99%, sys=5.19%, ctx=1244, majf=0, minf=1 00:17:42.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:42.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.194 issued rwts: total=8283,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.194 job2: (groupid=0, jobs=1): err= 0: pid=4078564: Mon Jun 10 10:45:10 2024 00:17:42.194 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:42.194 slat (nsec): min=1431, max=2048.4k, avg=107588.66, stdev=361735.62 00:17:42.194 clat (usec): min=7801, max=17155, avg=13836.03, stdev=3270.56 00:17:42.194 lat (usec): min=7941, max=17157, avg=13943.62, stdev=3278.78 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9372], 00:17:42.194 | 30.00th=[ 9503], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:17:42.194 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16712], 95.00th=[16909], 00:17:42.194 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:17:42.194 | 99.99th=[17171] 00:17:42.194 write: IOPS=4769, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1003msec); 0 zone resets 00:17:42.194 slat (nsec): min=1897, max=2938.1k, avg=102486.84, stdev=347254.99 00:17:42.194 clat (usec): min=2103, max=22011, avg=13279.52, stdev=3742.76 00:17:42.194 lat (usec): min=2115, max=22014, avg=13382.01, stdev=3755.61 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 4047], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 8848], 00:17:42.194 | 30.00th=[ 9110], 40.00th=[14615], 50.00th=[15270], 60.00th=[15795], 00:17:42.194 | 70.00th=[16057], 80.00th=[16188], 90.00th=[16450], 95.00th=[16909], 00:17:42.194 | 99.00th=[18220], 99.50th=[20317], 99.90th=[21890], 99.95th=[21890], 00:17:42.194 | 99.99th=[21890] 00:17:42.194 bw ( KiB/s): min=15496, max=21760, per=17.19%, avg=18628.00, stdev=4429.32, samples=2 00:17:42.194 iops : min= 3874, max= 5440, avg=4657.00, stdev=1107.33, samples=2 00:17:42.194 lat (msec) : 4=0.30%, 10=33.20%, 20=66.19%, 50=0.31% 00:17:42.194 cpu : usr=1.90%, sys=3.69%, ctx=1693, majf=0, minf=1 00:17:42.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:42.194 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.194 issued rwts: total=4608,4784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.194 job3: (groupid=0, jobs=1): err= 0: pid=4078565: Mon Jun 10 10:45:10 2024 00:17:42.194 read: IOPS=4595, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:42.194 slat (nsec): min=1456, max=2329.4k, avg=104335.13, stdev=305014.95 00:17:42.194 clat (usec): min=2126, max=18177, avg=13479.28, stdev=3905.80 00:17:42.194 lat (usec): min=4159, max=18186, avg=13583.62, stdev=3924.87 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 7177], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 8160], 00:17:42.194 | 30.00th=[ 8717], 40.00th=[15533], 50.00th=[15926], 60.00th=[16188], 00:17:42.194 | 70.00th=[16319], 80.00th=[16450], 90.00th=[16712], 95.00th=[16909], 00:17:42.194 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:17:42.194 | 99.99th=[18220] 00:17:42.194 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:17:42.194 slat (nsec): min=1939, max=2569.9k, avg=98948.91, stdev=304857.74 00:17:42.194 clat (usec): min=4160, max=17657, avg=12671.95, stdev=3967.65 00:17:42.194 lat (usec): min=4162, max=17659, avg=12770.90, stdev=3989.39 00:17:42.194 clat percentiles (usec): 00:17:42.194 | 1.00th=[ 6915], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7701], 00:17:42.194 | 30.00th=[ 8160], 40.00th=[13960], 50.00th=[15139], 60.00th=[15533], 00:17:42.194 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:17:42.194 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:17:42.194 | 99.99th=[17695] 00:17:42.194 bw ( KiB/s): min=15376, max=24576, per=18.43%, avg=19976.00, stdev=6505.38, samples=2 00:17:42.194 iops : min= 3844, max= 6144, avg=4994.00, stdev=1626.35, samples=2 00:17:42.194 lat (msec) : 4=0.01%, 10=35.68%, 20=64.31% 00:17:42.194 cpu : usr=1.80%, sys=3.89%, ctx=1669, majf=0, minf=1 00:17:42.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:42.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.194 issued rwts: total=4609,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.194 00:17:42.194 Run status group 0 (all jobs): 00:17:42.194 READ: bw=100MiB/s (105MB/s), 17.9MiB/s-32.3MiB/s (18.8MB/s-33.8MB/s), io=100MiB (105MB), run=1001-1003msec 00:17:42.194 WRITE: bw=106MiB/s (111MB/s), 18.6MiB/s-33.9MiB/s (19.5MB/s-35.5MB/s), io=106MiB (111MB), run=1001-1003msec 00:17:42.194 00:17:42.194 Disk stats (read/write): 00:17:42.194 nvme0n1: ios=6868/7168, merge=0/0, ticks=13382/13512, in_queue=26894, util=86.37% 00:17:42.194 nvme0n2: ios=6939/7168, merge=0/0, ticks=23245/23170, in_queue=46415, util=86.80% 00:17:42.194 nvme0n3: ios=4093/4096, merge=0/0, ticks=24318/23521, in_queue=47839, util=88.88% 00:17:42.194 nvme0n4: ios=4096/4438, merge=0/0, ticks=13460/13545, in_queue=27005, util=89.62% 00:17:42.194 10:45:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:42.194 10:45:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4078810 00:17:42.194 10:45:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf 
-i 4096 -d 1 -t read -r 10 00:17:42.194 10:45:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:42.194 [global] 00:17:42.194 thread=1 00:17:42.194 invalidate=1 00:17:42.194 rw=read 00:17:42.194 time_based=1 00:17:42.194 runtime=10 00:17:42.194 ioengine=libaio 00:17:42.194 direct=1 00:17:42.194 bs=4096 00:17:42.194 iodepth=1 00:17:42.194 norandommap=1 00:17:42.194 numjobs=1 00:17:42.194 00:17:42.194 [job0] 00:17:42.194 filename=/dev/nvme0n1 00:17:42.194 [job1] 00:17:42.194 filename=/dev/nvme0n2 00:17:42.195 [job2] 00:17:42.195 filename=/dev/nvme0n3 00:17:42.195 [job3] 00:17:42.195 filename=/dev/nvme0n4 00:17:42.195 Could not set queue depth (nvme0n1) 00:17:42.195 Could not set queue depth (nvme0n2) 00:17:42.195 Could not set queue depth (nvme0n3) 00:17:42.195 Could not set queue depth (nvme0n4) 00:17:42.195 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:42.195 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:42.195 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:42.195 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:42.195 fio-3.35 00:17:42.195 Starting 4 threads 00:17:45.481 10:45:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:45.481 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=74924032, buflen=4096 00:17:45.481 fio: pid=4078959, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:45.481 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:45.481 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=104968192, buflen=4096 00:17:45.481 fio: pid=4078958, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:45.481 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:45.481 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:45.481 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=34996224, buflen=4096 00:17:45.481 fio: pid=4078950, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:45.481 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:45.481 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:45.741 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=29233152, buflen=4096 00:17:45.741 fio: pid=4078951, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:45.741 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:45.741 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:45.741 00:17:45.741 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4078950: Mon Jun 10 10:45:14 
2024 00:17:45.741 read: IOPS=8246, BW=32.2MiB/s (33.8MB/s)(97.4MiB/3023msec) 00:17:45.741 slat (usec): min=6, max=13355, avg= 9.05, stdev=146.85 00:17:45.741 clat (usec): min=55, max=8765, avg=110.69, stdev=84.13 00:17:45.741 lat (usec): min=62, max=13477, avg=119.74, stdev=169.30 00:17:45.741 clat percentiles (usec): 00:17:45.741 | 1.00th=[ 68], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:17:45.741 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 119], 00:17:45.741 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 163], 00:17:45.741 | 99.00th=[ 180], 99.50th=[ 196], 99.90th=[ 215], 99.95th=[ 219], 00:17:45.741 | 99.99th=[ 231] 00:17:45.741 bw ( KiB/s): min=24128, max=44232, per=29.24%, avg=33344.00, stdev=8399.03, samples=5 00:17:45.741 iops : min= 6032, max=11058, avg=8336.00, stdev=2099.76, samples=5 00:17:45.741 lat (usec) : 100=55.11%, 250=44.88% 00:17:45.741 lat (msec) : 10=0.01% 00:17:45.741 cpu : usr=2.61%, sys=9.53%, ctx=24933, majf=0, minf=1 00:17:45.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:45.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 issued rwts: total=24929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:45.741 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4078951: Mon Jun 10 10:45:14 2024 00:17:45.741 read: IOPS=7259, BW=28.4MiB/s (29.7MB/s)(91.9MiB/3240msec) 00:17:45.741 slat (usec): min=2, max=15981, avg= 9.80, stdev=176.59 00:17:45.741 clat (usec): min=53, max=8750, avg=126.41, stdev=101.70 00:17:45.741 lat (usec): min=61, max=16063, avg=136.21, stdev=203.27 00:17:45.741 clat percentiles (usec): 00:17:45.741 | 1.00th=[ 60], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 89], 00:17:45.741 | 30.00th=[ 119], 40.00th=[ 128], 50.00th=[ 135], 60.00th=[ 141], 00:17:45.741 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:17:45.741 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 212], 99.95th=[ 219], 00:17:45.741 | 99.99th=[ 8586] 00:17:45.741 bw ( KiB/s): min=24536, max=31511, per=24.28%, avg=27691.83, stdev=2533.77, samples=6 00:17:45.741 iops : min= 6134, max= 7877, avg=6922.83, stdev=633.22, samples=6 00:17:45.741 lat (usec) : 100=22.48%, 250=77.50%, 750=0.01% 00:17:45.741 lat (msec) : 10=0.01% 00:17:45.741 cpu : usr=2.75%, sys=8.00%, ctx=23528, majf=0, minf=1 00:17:45.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:45.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 issued rwts: total=23522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:45.741 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4078958: Mon Jun 10 10:45:14 2024 00:17:45.741 read: IOPS=9068, BW=35.4MiB/s (37.1MB/s)(100MiB/2826msec) 00:17:45.741 slat (usec): min=5, max=11871, avg= 8.21, stdev=88.72 00:17:45.741 clat (usec): min=69, max=8740, avg=100.49, stdev=57.84 00:17:45.741 lat (usec): min=75, max=12016, avg=108.70, stdev=106.27 00:17:45.741 clat percentiles (usec): 00:17:45.741 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:17:45.741 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 
00:17:45.741 | 70.00th=[ 98], 80.00th=[ 105], 90.00th=[ 145], 95.00th=[ 149], 00:17:45.741 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 182], 99.95th=[ 188], 00:17:45.741 | 99.99th=[ 196] 00:17:45.741 bw ( KiB/s): min=26792, max=41032, per=32.86%, avg=37473.60, stdev=5996.69, samples=5 00:17:45.741 iops : min= 6698, max=10258, avg=9368.40, stdev=1499.17, samples=5 00:17:45.741 lat (usec) : 100=75.22%, 250=24.77%, 750=0.01% 00:17:45.741 lat (msec) : 10=0.01% 00:17:45.741 cpu : usr=3.54%, sys=10.30%, ctx=25630, majf=0, minf=1 00:17:45.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:45.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 issued rwts: total=25628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:45.741 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4078959: Mon Jun 10 10:45:14 2024 00:17:45.741 read: IOPS=6923, BW=27.0MiB/s (28.4MB/s)(71.5MiB/2642msec) 00:17:45.741 slat (nsec): min=5876, max=42902, avg=7454.35, stdev=1401.54 00:17:45.741 clat (usec): min=76, max=644, avg=135.15, stdev=23.26 00:17:45.741 lat (usec): min=84, max=651, avg=142.61, stdev=23.09 00:17:45.741 clat percentiles (usec): 00:17:45.741 | 1.00th=[ 85], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 121], 00:17:45.741 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 145], 00:17:45.741 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:17:45.741 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 208], 99.95th=[ 215], 00:17:45.741 | 99.99th=[ 225] 00:17:45.741 bw ( KiB/s): min=24496, max=34520, per=24.51%, avg=27947.20, stdev=3922.92, samples=5 00:17:45.741 iops : min= 6124, max= 8630, avg=6986.80, stdev=980.73, samples=5 00:17:45.741 lat (usec) : 100=14.61%, 250=85.38%, 750=0.01% 00:17:45.741 cpu : usr=2.20%, sys=7.72%, ctx=18294, majf=0, minf=2 00:17:45.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:45.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.741 issued rwts: total=18293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:45.741 00:17:45.741 Run status group 0 (all jobs): 00:17:45.741 READ: bw=111MiB/s (117MB/s), 27.0MiB/s-35.4MiB/s (28.4MB/s-37.1MB/s), io=361MiB (378MB), run=2642-3240msec 00:17:45.741 00:17:45.741 Disk stats (read/write): 00:17:45.741 nvme0n1: ios=23015/0, merge=0/0, ticks=2438/0, in_queue=2438, util=93.32% 00:17:45.742 nvme0n2: ios=21281/0, merge=0/0, ticks=2697/0, in_queue=2697, util=93.36% 00:17:45.742 nvme0n3: ios=25627/0, merge=0/0, ticks=2415/0, in_queue=2415, util=95.60% 00:17:45.742 nvme0n4: ios=17881/0, merge=0/0, ticks=2308/0, in_queue=2308, util=96.40% 00:17:46.001 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:46.001 10:45:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:46.260 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:46.260 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:46.260 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:46.260 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:46.519 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:46.519 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:46.778 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:46.778 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 4078810 00:17:46.778 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:46.778 10:45:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:47.726 nvmf hotplug test: fio failed as expected 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:47.726 rmmod nvme_rdma 
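The hotplug pass that ends above is worth unpacking: a 10-second fio read job is started in the background against all four namespaces, and while it runs the backing bdevs are deleted one by one over RPC, so every in-flight io_u fails with err=121 (Remote I/O error). fio then exits non-zero (fio_status=4), which the harness treats as the expected outcome. A condensed sketch of that control flow, assuming the SPDK tree layout seen in this log (the real fio.sh differs in detail):

spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
"$spdk"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s read job
fio_pid=$!
sleep 3                                          # let I/O get going first
"$spdk"/scripts/rpc.py bdev_raid_delete concat0  # nvme0n4 starts failing (err=121)
"$spdk"/scripts/rpc.py bdev_raid_delete raid0    # then nvme0n3
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$spdk"/scripts/rpc.py bdev_malloc_delete "$m"   # remaining backing devices go away
done
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'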
00:17:47.726 rmmod nvme_fabrics 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4075630 ']' 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4075630 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 4075630 ']' 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 4075630 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4075630 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4075630' 00:17:47.726 killing process with pid 4075630 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 4075630 00:17:47.726 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 4075630 00:17:48.030 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.030 10:45:16 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:48.030 00:17:48.030 real 0m24.592s 00:17:48.030 user 1m47.418s 00:17:48.030 sys 0m8.729s 00:17:48.030 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:48.030 10:45:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.030 ************************************ 00:17:48.030 END TEST nvmf_fio_target 00:17:48.030 ************************************ 00:17:48.030 10:45:16 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:17:48.030 10:45:16 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:48.030 10:45:16 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:48.030 10:45:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:48.030 ************************************ 00:17:48.030 START TEST nvmf_bdevio 00:17:48.030 ************************************ 00:17:48.030 10:45:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:17:48.030 * Looking for test storage... 
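Teardown of nvmf_fio_target finishes with killprocess against the nvmf_tgt pid; the traced steps amount to: confirm the pid is alive, confirm via ps that it is the reactor process rather than a sudo wrapper, send SIGTERM, and reap it. A simplified sketch (the real helper in autotest_common.sh also special-cases sudo-wrapped processes and non-Linux hosts):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1            # nothing left to kill
    if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true                   # reap the target
}

With the fio target gone, run_test launches the next script in the suite, test/nvmf/target/bdevio.sh --transport=rdma, whose setup output follows.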
00:17:48.030 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.030 10:45:17 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:48.289 10:45:17 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
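The probe above first builds per-family device-ID tables (Intel E810: 0x1592 and 0x159b, Intel X722: 0x37d2, plus a list of Mellanox ConnectX IDs) and, because this rig is an e810 setup, narrows pci_devs to the E810 entries before walking them. Outside the harness the same classification can be cross-checked with lspci; this is illustrative only, where 8086 is the Intel vendor ID and 159b one of the E810 device IDs from the table:

# list PCI functions matching vendor 8086, device 159b, with full domain shown
lspci -D -d 8086:159b
# expected on this host: the two ports reported next in the log,
# 0000:af:00.0 and 0000:af:00.1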
00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:53.563 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:53.563 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@377 -- # modinfo irdma 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:53.563 Found net devices under 0000:af:00.0: cvl_0_0 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:53.563 Found net devices under 0000:af:00.1: cvl_0_1 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:53.563 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.823 
10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:17:53.823 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:53.823 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:53.823 altname enp175s0f0np0 00:17:53.823 altname ens801f0np0 00:17:53.823 inet 192.168.100.8/24 scope global cvl_0_0 00:17:53.823 valid_lft forever preferred_lft forever 00:17:53.823 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:53.823 valid_lft forever preferred_lft forever 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:17:53.823 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:53.823 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:53.823 altname enp175s0f1np1 00:17:53.823 altname ens801f1np1 00:17:53.823 inet 192.168.100.9/24 scope global cvl_0_1 00:17:53.823 valid_lft forever preferred_lft forever 00:17:53.823 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:53.823 valid_lft forever preferred_lft forever 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # 
get_rdma_if_list 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:17:53.823 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:53.824 192.168.100.9' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:53.824 192.168.100.9' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:53.824 192.168.100.9' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 
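get_ip_address above reduces the one-line output of "ip -o -4 addr show <if>" to a bare IPv4 address, and the caller then splits the resulting two-address list into first and second target IPs with head/tail, exactly as the next records show. A sketch of the same pipeline, with the interface names assumed from this run:

# Mirror nvmf/common.sh's get_ip_address: with -o each address is one line,
# field 4 is "address/prefix", and cut drops the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9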
00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4083320 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4083320 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 4083320 ']' 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:53.824 10:45:22 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:53.824 [2024-06-10 10:45:22.791235] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:53.824 [2024-06-10 10:45:22.791284] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.824 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.824 [2024-06-10 10:45:22.851668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.082 [2024-06-10 10:45:22.929347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.082 [2024-06-10 10:45:22.929383] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.082 [2024-06-10 10:45:22.929390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.082 [2024-06-10 10:45:22.929396] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.083 [2024-06-10 10:45:22.929401] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
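nvmfappstart -m 0x78 above launches the target with reactor mask 0x78 = 0b1111000, i.e. cores 3-6, which matches the four reactor_run notices that follow. An equivalent manual start, assuming the build path from this workspace and SPDK's default RPC socket; the polling loop is a simplified stand-in for the harness's waitforlisten:

# 0x78 sets bits 3,4,5,6 -> one reactor each on cores 3-6.
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
# Wait until the RPC socket exists before issuing any rpc.py calls.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done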
00:17:54.083 [2024-06-10 10:45:22.929514] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:17:54.083 [2024-06-10 10:45:22.929635] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:17:54.083 [2024-06-10 10:45:22.930105] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.083 [2024-06-10 10:45:22.930105] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:17:54.650 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:54.651 [2024-06-10 10:45:23.654097] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x237f1d0/0x237e810) succeed. 00:17:54.651 [2024-06-10 10:45:23.662995] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x2380580/0x237ed90) succeed. 00:17:54.651 [2024-06-10 10:45:23.663014] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.651 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:54.910 Malloc0 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:54.910 [2024-06-10 10:45:23.717806] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:54.910 { 00:17:54.910 "params": { 00:17:54.910 "name": "Nvme$subsystem", 00:17:54.910 "trtype": "$TEST_TRANSPORT", 00:17:54.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.910 "adrfam": "ipv4", 00:17:54.910 "trsvcid": "$NVMF_PORT", 00:17:54.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.910 "hdgst": ${hdgst:-false}, 00:17:54.910 "ddgst": ${ddgst:-false} 00:17:54.910 }, 00:17:54.910 "method": "bdev_nvme_attach_controller" 00:17:54.910 } 00:17:54.910 EOF 00:17:54.910 )") 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
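Before bdevio starts, the script has provisioned the target over RPC: a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and an RDMA listener on the first target IP. The same setup issued by hand through scripts/rpc.py (arguments copied from the rpc_cmd traces above; rpc.py talks to the default /var/tmp/spdk.sock):

SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
# RDMA transport with the shared-buffer count and 8 KiB io_unit_size from the trace.
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# 64 MiB backing bdev, 512-byte blocks.
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420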
00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:54.910 10:45:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:54.910 "params": { 00:17:54.910 "name": "Nvme1", 00:17:54.910 "trtype": "rdma", 00:17:54.910 "traddr": "192.168.100.8", 00:17:54.910 "adrfam": "ipv4", 00:17:54.910 "trsvcid": "4420", 00:17:54.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.910 "hdgst": false, 00:17:54.910 "ddgst": false 00:17:54.910 }, 00:17:54.910 "method": "bdev_nvme_attach_controller" 00:17:54.910 }' 00:17:54.910 [2024-06-10 10:45:23.762725] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:54.910 [2024-06-10 10:45:23.762771] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083458 ] 00:17:54.910 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.910 [2024-06-10 10:45:23.822978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.910 [2024-06-10 10:45:23.896541] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.910 [2024-06-10 10:45:23.896638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.910 [2024-06-10 10:45:23.896639] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.169 I/O targets: 00:17:55.169 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:55.169 00:17:55.169 00:17:55.169 CUnit - A unit testing framework for C - Version 2.1-3 00:17:55.169 http://cunit.sourceforge.net/ 00:17:55.169 00:17:55.169 00:17:55.169 Suite: bdevio tests on: Nvme1n1 00:17:55.169 Test: blockdev write read block ...passed 00:17:55.169 Test: blockdev write zeroes read block ...passed 00:17:55.169 Test: blockdev write zeroes read no split ...passed 00:17:55.169 Test: blockdev write zeroes read split ...passed 00:17:55.169 Test: blockdev write zeroes read split partial ...passed 00:17:55.169 Test: blockdev reset ...[2024-06-10 10:45:24.092379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.169 [2024-06-10 10:45:24.115251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:55.169 [2024-06-10 10:45:24.143998] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:55.169 passed 00:17:55.169 Test: blockdev write read 8 blocks ...passed 00:17:55.169 Test: blockdev write read size > 128k ...passed 00:17:55.169 Test: blockdev write read invalid size ...passed 00:17:55.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:55.169 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:55.170 Test: blockdev write read max offset ...passed 00:17:55.170 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:55.170 Test: blockdev writev readv 8 blocks ...passed 00:17:55.170 Test: blockdev writev readv 30 x 1block ...passed 00:17:55.170 Test: blockdev writev readv block ...passed 00:17:55.170 Test: blockdev writev readv size > 128k ...passed 00:17:55.170 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:55.170 Test: blockdev comparev and writev ...[2024-06-10 10:45:24.147305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.147340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.147528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.147544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.147710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.147725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.147914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.147930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.170 [2024-06-10 10:45:24.147936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.170 passed 00:17:55.170 Test: blockdev nvme passthru rw ...passed 00:17:55.170 Test: blockdev nvme passthru vendor specific ...[2024-06-10 10:45:24.148233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:55.170 [2024-06-10 10:45:24.148243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.148294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:55.170 [2024-06-10 10:45:24.148301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.148349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:55.170 [2024-06-10 10:45:24.148356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:55.170 [2024-06-10 10:45:24.148405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:55.170 [2024-06-10 10:45:24.148412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:55.170 passed 00:17:55.170 Test: blockdev nvme admin passthru ...passed 00:17:55.170 Test: blockdev copy ...passed 00:17:55.170 00:17:55.170 Run Summary: Type Total Ran Passed Failed Inactive 00:17:55.170 suites 1 1 n/a 0 0 00:17:55.170 tests 23 23 23 0 0 00:17:55.170 asserts 152 152 152 0 n/a 00:17:55.170 00:17:55.170 Elapsed time = 0.180 seconds 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:55.429 rmmod nvme_rdma 00:17:55.429 rmmod nvme_fabrics 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4083320 ']' 00:17:55.429 10:45:24 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4083320 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 4083320 ']' 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 4083320 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4083320 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4083320' 00:17:55.429 killing process with pid 4083320 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 4083320 00:17:55.429 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 4083320 00:17:55.688 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.688 10:45:24 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:55.688 00:17:55.688 real 0m7.686s 00:17:55.688 user 0m9.559s 00:17:55.688 sys 0m4.852s 00:17:55.688 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:55.688 10:45:24 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:55.688 ************************************ 00:17:55.688 END TEST nvmf_bdevio 00:17:55.688 ************************************ 00:17:55.688 10:45:24 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:17:55.688 10:45:24 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:55.688 10:45:24 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:55.688 10:45:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:55.947 ************************************ 00:17:55.947 START TEST nvmf_auth_target 00:17:55.947 ************************************ 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:17:55.947 * Looking for test storage... 
00:17:55.947 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:55.947 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.948 10:45:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.221 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:01.222 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:01.222 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@377 -- # modinfo irdma 00:18:01.222 10:45:30 
nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:01.222 Found net devices under 0000:af:00.0: cvl_0_0 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:01.222 Found net devices under 0000:af:00.1: cvl_0_1 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:18:01.222 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:01.222 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:18:01.222 altname enp175s0f0np0 00:18:01.222 altname ens801f0np0 00:18:01.222 inet 192.168.100.8/24 scope global cvl_0_0 00:18:01.222 valid_lft forever preferred_lft forever 00:18:01.222 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:18:01.222 valid_lft forever preferred_lft forever 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 
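The records above are nvmftestinit re-running, for the auth test, the discovery already seen before bdevio: the E810 ports (device ID 0x159b) are classified, the irdma driver is loaded with roce_ena=1 so they run RoCEv2 rather than the driver's iWARP default, and the cvl_0_0/cvl_0_1 addresses are read back. A hedged sketch of the driver step only; the parameter semantics are assumed, while the trace itself simply runs modprobe irdma roce_ena=1 after the prologue's rmmod:

# Load irdma with RoCEv2 enabled; skip quietly if the module is not installed.
if modinfo irdma >/dev/null 2>&1; then
    modprobe irdma roce_ena=1
fi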
00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:18:01.222 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:18:01.222 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:18:01.222 altname enp175s0f1np1 00:18:01.222 altname ens801f1np1 00:18:01.222 inet 192.168.100.9/24 scope global cvl_0_1 00:18:01.222 valid_lft forever preferred_lft forever 00:18:01.222 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:18:01.222 valid_lft forever preferred_lft forever 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:01.222 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:01.482 192.168.100.9' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:01.482 192.168.100.9' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:01.482 192.168.100.9' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4086997 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4086997 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 4086997 ']' 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
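With the target app listening, auth.sh next starts a second SPDK app (spdk_tgt on its own core, RPC socket /var/tmp/host.sock) to play the host side, then generates the DH-CHAP secrets traced below: random bytes read from /dev/urandom with xxd, wrapped into the DHHC-1 interchange format by an inline python helper. A sketch of one "gen_dhchap_key null 48" round; the payload layout (key bytes plus little-endian CRC-32, base64-encoded, digest byte 00=null, 01=sha256, 02=sha384, 03=sha512) is assumed from the NVMe DH-HMAC-CHAP secret representation rather than copied from the trace's helper:

# 48 hex chars = 24 random bytes, as in gen_dhchap_key null 48.
key=$(xxd -p -c0 -l 24 /dev/urandom)
secret=$(python3 - "$key" 0 <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
crc = struct.pack("<I", binascii.crc32(key))  # CRC-32 of the key, little-endian
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
)
file=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$file"
chmod 0600 "$file"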
00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:01.482 10:45:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4087239 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b1ed7e15980fd86525bc813870a938f13096a5c3ae8321d 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8K6 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b1ed7e15980fd86525bc813870a938f13096a5c3ae8321d 0 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b1ed7e15980fd86525bc813870a938f13096a5c3ae8321d 0 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1b1ed7e15980fd86525bc813870a938f13096a5c3ae8321d 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8K6 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8K6 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.8K6 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:02.418 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f948c748f30c66660352e2a287dcb891c5fcb83440817720b8f1cd89c67bc783 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wJF 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f948c748f30c66660352e2a287dcb891c5fcb83440817720b8f1cd89c67bc783 3 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f948c748f30c66660352e2a287dcb891c5fcb83440817720b8f1cd89c67bc783 3 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f948c748f30c66660352e2a287dcb891c5fcb83440817720b8f1cd89c67bc783 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wJF 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wJF 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.wJF 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=741bea542b161db4625efaac0b202519 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HjT 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 741bea542b161db4625efaac0b202519 1 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 741bea542b161db4625efaac0b202519 1 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=741bea542b161db4625efaac0b202519 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HjT 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HjT 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.HjT 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=acfe8954fc57af12d344b6f423f68caaf352d0ae2e58b46f 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8En 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key acfe8954fc57af12d344b6f423f68caaf352d0ae2e58b46f 2 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 acfe8954fc57af12d344b6f423f68caaf352d0ae2e58b46f 2 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=acfe8954fc57af12d344b6f423f68caaf352d0ae2e58b46f 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:02.419 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8En 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8En 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.8En 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=88f31a47c04e5cba8f4d1a826df295a3c9f73c9b499676f4 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DwQ 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 88f31a47c04e5cba8f4d1a826df295a3c9f73c9b499676f4 2 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 88f31a47c04e5cba8f4d1a826df295a3c9f73c9b499676f4 2 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=88f31a47c04e5cba8f4d1a826df295a3c9f73c9b499676f4 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DwQ 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DwQ 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.DwQ 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6194920f567b6e3c6334db738e80afc0 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6us 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6194920f567b6e3c6334db738e80afc0 1 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6194920f567b6e3c6334db738e80afc0 1 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6194920f567b6e3c6334db738e80afc0 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6us 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6us 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.6us 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4db7bd14c22204959a915ee2723527721f64fbce59933544dbebb598a4810a94 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tkp 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4db7bd14c22204959a915ee2723527721f64fbce59933544dbebb598a4810a94 3 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4db7bd14c22204959a915ee2723527721f64fbce59933544dbebb598a4810a94 3 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4db7bd14c22204959a915ee2723527721f64fbce59933544dbebb598a4810a94 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tkp 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tkp 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.tkp 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4086997 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 4086997 ']' 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
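[editor's note] The gen_dhchap_key traces above all follow one recipe: xxd reads len/2 random bytes from /dev/urandom as a hex string, mktemp names a /tmp/spdk.key-<digest>.XXX file, and format_dhchap_key wraps the hex string into the DHHC-1 secret representation before chmod 0600. A minimal standalone sketch of one sha512/64 invocation follows; the encoding (ASCII hex string plus little-endian CRC32, base64-encoded into "DHHC-1:<digest>:<base64>:") is inferred from the secrets visible in the nvme connect calls later in this log, not quoted from nvmf/common.sh, and python3 stands in for the bare "python -" seen in the trace.

# Sketch of "gen_dhchap_key sha512 64" as traced above (digest id 3 = sha512,
# per the digests map at nvmf/common.sh@724).
key=$(xxd -p -c0 -l 32 /dev/urandom)   # 32 random bytes -> 64 hex chars
file=$(mktemp -t spdk.key-sha512.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the ASCII hex string itself
crc = zlib.crc32(key).to_bytes(4, "little")    # integrity tag appended before base64
print("DHHC-1:{:02x}:{}:".format(3, base64.b64encode(key + crc).decode()), end="")
EOF
chmod 0600 "$file"   # secrets are owner-read/write only before keyring registration

As a cross-check, base64-decoding the DHHC-1:03:Zjk0OGM3... controller secret used in the nvme connect call below yields exactly the f948c748... hex string generated at nvmf/common.sh@727 above.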
00:18:02.678 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:02.679 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4087239 /var/tmp/host.sock 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 4087239 ']' 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:02.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:02.937 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.196 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:03.196 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:18:03.196 10:45:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:03.196 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.196 10:45:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8K6 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8K6 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8K6 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.wJF ]] 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wJF 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.196 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wJF 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wJF 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HjT 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HjT 00:18:03.455 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HjT 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.8En ]] 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8En 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8En 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8En 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DwQ 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DwQ 00:18:03.713 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DwQ 00:18:03.971 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.6us ]] 00:18:03.971 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6us 00:18:03.971 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.971 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.971 10:45:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.971 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6us 00:18:03.971 10:45:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6us 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tkp 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tkp 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tkp 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.229 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.487 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.746 00:18:04.746 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.746 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.746 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.005 { 00:18:05.005 "cntlid": 1, 00:18:05.005 "qid": 0, 00:18:05.005 "state": "enabled", 00:18:05.005 "listen_address": { 00:18:05.005 "trtype": "RDMA", 00:18:05.005 "adrfam": "IPv4", 00:18:05.005 "traddr": "192.168.100.8", 00:18:05.005 "trsvcid": "4420" 00:18:05.005 }, 00:18:05.005 "peer_address": { 00:18:05.005 "trtype": "RDMA", 00:18:05.005 "adrfam": "IPv4", 00:18:05.005 "traddr": "192.168.100.8", 00:18:05.005 "trsvcid": "50890" 00:18:05.005 }, 00:18:05.005 "auth": { 00:18:05.005 "state": "completed", 00:18:05.005 "digest": "sha256", 00:18:05.005 "dhgroup": "null" 00:18:05.005 } 00:18:05.005 } 00:18:05.005 ]' 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.005 10:45:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.264 10:45:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.831 10:45:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.090 10:45:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.091 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.091 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.350 00:18:06.350 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.350 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.350 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.609 10:45:35 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.609 { 00:18:06.609 "cntlid": 3, 00:18:06.609 "qid": 0, 00:18:06.609 "state": "enabled", 00:18:06.609 "listen_address": { 00:18:06.609 "trtype": "RDMA", 00:18:06.609 "adrfam": "IPv4", 00:18:06.609 "traddr": "192.168.100.8", 00:18:06.609 "trsvcid": "4420" 00:18:06.609 }, 00:18:06.609 "peer_address": { 00:18:06.609 "trtype": "RDMA", 00:18:06.609 "adrfam": "IPv4", 00:18:06.609 "traddr": "192.168.100.8", 00:18:06.609 "trsvcid": "48965" 00:18:06.609 }, 00:18:06.609 "auth": { 00:18:06.609 "state": "completed", 00:18:06.609 "digest": "sha256", 00:18:06.609 "dhgroup": "null" 00:18:06.609 } 00:18:06.609 } 00:18:06.609 ]' 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.609 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.868 10:45:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.435 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.726 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.985 00:18:07.985 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.985 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.985 10:45:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.244 { 00:18:08.244 "cntlid": 5, 00:18:08.244 "qid": 0, 00:18:08.244 "state": "enabled", 00:18:08.244 "listen_address": { 00:18:08.244 "trtype": "RDMA", 00:18:08.244 "adrfam": "IPv4", 00:18:08.244 "traddr": "192.168.100.8", 00:18:08.244 "trsvcid": "4420" 00:18:08.244 }, 00:18:08.244 "peer_address": { 00:18:08.244 "trtype": "RDMA", 00:18:08.244 "adrfam": "IPv4", 00:18:08.244 "traddr": "192.168.100.8", 00:18:08.244 "trsvcid": "45703" 00:18:08.244 }, 00:18:08.244 "auth": { 00:18:08.244 "state": "completed", 00:18:08.244 "digest": "sha256", 00:18:08.244 "dhgroup": "null" 00:18:08.244 } 00:18:08.244 } 00:18:08.244 ]' 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.244 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.503 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:09.071 10:45:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.071 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:09.071 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.071 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.071 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.071 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.071 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.071 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.330 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.589 00:18:09.589 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.589 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.589 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.849 { 00:18:09.849 "cntlid": 7, 00:18:09.849 "qid": 0, 00:18:09.849 "state": "enabled", 00:18:09.849 "listen_address": { 00:18:09.849 "trtype": "RDMA", 00:18:09.849 "adrfam": "IPv4", 00:18:09.849 "traddr": "192.168.100.8", 00:18:09.849 "trsvcid": "4420" 00:18:09.849 }, 00:18:09.849 "peer_address": { 00:18:09.849 "trtype": "RDMA", 00:18:09.849 "adrfam": "IPv4", 00:18:09.849 "traddr": "192.168.100.8", 00:18:09.849 "trsvcid": "35800" 00:18:09.849 }, 00:18:09.849 "auth": { 00:18:09.849 "state": "completed", 00:18:09.849 "digest": "sha256", 00:18:09.849 "dhgroup": "null" 00:18:09.849 } 00:18:09.849 } 00:18:09.849 ]' 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.849 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.108 10:45:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect 
-n nqn.2024-03.io.spdk:cnode0 00:18:10.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.675 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.933 10:45:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.192 00:18:11.192 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.192 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.192 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.451 { 00:18:11.451 "cntlid": 9, 00:18:11.451 "qid": 0, 00:18:11.451 "state": "enabled", 00:18:11.451 "listen_address": { 00:18:11.451 "trtype": "RDMA", 00:18:11.451 "adrfam": "IPv4", 00:18:11.451 "traddr": "192.168.100.8", 00:18:11.451 "trsvcid": "4420" 00:18:11.451 }, 00:18:11.451 "peer_address": { 00:18:11.451 "trtype": "RDMA", 00:18:11.451 "adrfam": "IPv4", 00:18:11.451 "traddr": "192.168.100.8", 00:18:11.451 "trsvcid": "55175" 00:18:11.451 }, 00:18:11.451 "auth": { 00:18:11.451 "state": "completed", 00:18:11.451 "digest": "sha256", 00:18:11.451 "dhgroup": "ffdhe2048" 00:18:11.451 } 00:18:11.451 } 00:18:11.451 ]' 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.451 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.710 10:45:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:12.278 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.278 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:12.278 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.278 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.278 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.278 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.278 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.278 
10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.536 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.537 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.537 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.537 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.796 00:18:12.796 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.796 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.796 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.055 { 00:18:13.055 "cntlid": 11, 00:18:13.055 "qid": 0, 00:18:13.055 "state": "enabled", 00:18:13.055 "listen_address": { 00:18:13.055 "trtype": "RDMA", 00:18:13.055 "adrfam": "IPv4", 00:18:13.055 "traddr": "192.168.100.8", 00:18:13.055 "trsvcid": "4420" 00:18:13.055 }, 00:18:13.055 "peer_address": { 00:18:13.055 "trtype": "RDMA", 00:18:13.055 "adrfam": "IPv4", 00:18:13.055 "traddr": "192.168.100.8", 00:18:13.055 "trsvcid": "58189" 00:18:13.055 }, 
00:18:13.055 "auth": { 00:18:13.055 "state": "completed", 00:18:13.055 "digest": "sha256", 00:18:13.055 "dhgroup": "ffdhe2048" 00:18:13.055 } 00:18:13.055 } 00:18:13.055 ]' 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.055 10:45:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.313 10:45:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:13.879 10:45:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.880 10:45:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:13.880 10:45:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.880 10:45:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.880 10:45:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.880 10:45:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.880 10:45:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.880 10:45:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.138 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.397 00:18:14.397 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.397 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.397 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.655 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.655 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.655 10:45:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.655 10:45:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.655 10:45:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.655 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.655 { 00:18:14.655 "cntlid": 13, 00:18:14.655 "qid": 0, 00:18:14.655 "state": "enabled", 00:18:14.655 "listen_address": { 00:18:14.655 "trtype": "RDMA", 00:18:14.655 "adrfam": "IPv4", 00:18:14.655 "traddr": "192.168.100.8", 00:18:14.656 "trsvcid": "4420" 00:18:14.656 }, 00:18:14.656 "peer_address": { 00:18:14.656 "trtype": "RDMA", 00:18:14.656 "adrfam": "IPv4", 00:18:14.656 "traddr": "192.168.100.8", 00:18:14.656 "trsvcid": "45118" 00:18:14.656 }, 00:18:14.656 "auth": { 00:18:14.656 "state": "completed", 00:18:14.656 "digest": "sha256", 00:18:14.656 "dhgroup": "ffdhe2048" 00:18:14.656 } 00:18:14.656 } 00:18:14.656 ]' 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.656 10:45:43 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.914 10:45:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:15.482 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.482 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:15.482 10:45:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.482 10:45:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.741 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
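For reference, a condensed sketch of the RPC sequence each connect_authenticate pass above runs (HOST_NQN stands in for the uuid-based host NQN shown in this log; calls without -s go to the target's default RPC socket, while -s /var/tmp/host.sock drives the host-side app; the --dhchap-ctrlr-key arguments are dropped when the test's ckeys entry is empty, as it is for key3):

  # Pin the host to one digest/dhgroup pair for this pass.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Register the host on the target subsystem with the keys under test.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Attach a controller over RDMA, authenticating with the same keys.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q "$HOST_NQN" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Confirm the negotiated digest/dhgroup and the completed auth state, then tear down.
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0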
00:18:16.000 00:18:16.000 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.000 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.000 10:45:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.259 { 00:18:16.259 "cntlid": 15, 00:18:16.259 "qid": 0, 00:18:16.259 "state": "enabled", 00:18:16.259 "listen_address": { 00:18:16.259 "trtype": "RDMA", 00:18:16.259 "adrfam": "IPv4", 00:18:16.259 "traddr": "192.168.100.8", 00:18:16.259 "trsvcid": "4420" 00:18:16.259 }, 00:18:16.259 "peer_address": { 00:18:16.259 "trtype": "RDMA", 00:18:16.259 "adrfam": "IPv4", 00:18:16.259 "traddr": "192.168.100.8", 00:18:16.259 "trsvcid": "42185" 00:18:16.259 }, 00:18:16.259 "auth": { 00:18:16.259 "state": "completed", 00:18:16.259 "digest": "sha256", 00:18:16.259 "dhgroup": "ffdhe2048" 00:18:16.259 } 00:18:16.259 } 00:18:16.259 ]' 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.259 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.518 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.518 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:17.087 10:45:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.346 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.605 00:18:17.605 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.605 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.605 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.864 10:45:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.864 { 00:18:17.864 "cntlid": 17, 00:18:17.864 "qid": 0, 00:18:17.864 "state": "enabled", 00:18:17.864 "listen_address": { 00:18:17.864 "trtype": "RDMA", 00:18:17.864 "adrfam": "IPv4", 00:18:17.864 "traddr": "192.168.100.8", 00:18:17.864 "trsvcid": "4420" 00:18:17.864 }, 00:18:17.864 "peer_address": { 00:18:17.864 "trtype": "RDMA", 00:18:17.864 "adrfam": "IPv4", 00:18:17.864 "traddr": "192.168.100.8", 00:18:17.864 "trsvcid": "47242" 00:18:17.864 }, 00:18:17.864 "auth": { 00:18:17.864 "state": "completed", 00:18:17.864 "digest": "sha256", 00:18:17.864 "dhgroup": "ffdhe3072" 00:18:17.864 } 00:18:17.864 } 00:18:17.864 ]' 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.864 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.123 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.123 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.123 10:45:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.123 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:18.690 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
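Each pass above also exercises the Linux kernel initiator with the same credentials: nvme-cli connects using the DHHC-1 secret strings, then disconnects, and the host is deregistered before the next digest/dhgroup combination. A minimal sketch, with KEY and CKEY standing for the DHHC-1 secret and controller secret taken from the log:

  # Kernel-side connect authenticating with DH-HMAC-CHAP (single I/O queue).
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOST_NQN" --hostid "$HOST_ID" \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  # Drop the connection and deregister the host so the next pass starts clean.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"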
00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.949 10:45:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.208 00:18:19.208 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.208 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.208 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.467 { 00:18:19.467 "cntlid": 19, 00:18:19.467 "qid": 0, 00:18:19.467 "state": "enabled", 00:18:19.467 "listen_address": { 00:18:19.467 "trtype": "RDMA", 00:18:19.467 "adrfam": "IPv4", 00:18:19.467 "traddr": "192.168.100.8", 00:18:19.467 "trsvcid": "4420" 00:18:19.467 }, 00:18:19.467 "peer_address": { 00:18:19.467 "trtype": "RDMA", 00:18:19.467 "adrfam": "IPv4", 00:18:19.467 "traddr": "192.168.100.8", 00:18:19.467 "trsvcid": "54679" 00:18:19.467 }, 00:18:19.467 "auth": { 00:18:19.467 "state": "completed", 00:18:19.467 "digest": "sha256", 00:18:19.467 "dhgroup": "ffdhe3072" 00:18:19.467 } 00:18:19.467 } 00:18:19.467 ]' 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # 
jq -r '.[0].auth.dhgroup' 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.467 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.726 10:45:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:20.294 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.553 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.812 00:18:20.812 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.812 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.812 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.070 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.070 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.070 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.070 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 10:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.070 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.070 { 00:18:21.070 "cntlid": 21, 00:18:21.070 "qid": 0, 00:18:21.070 "state": "enabled", 00:18:21.070 "listen_address": { 00:18:21.070 "trtype": "RDMA", 00:18:21.070 "adrfam": "IPv4", 00:18:21.070 "traddr": "192.168.100.8", 00:18:21.071 "trsvcid": "4420" 00:18:21.071 }, 00:18:21.071 "peer_address": { 00:18:21.071 "trtype": "RDMA", 00:18:21.071 "adrfam": "IPv4", 00:18:21.071 "traddr": "192.168.100.8", 00:18:21.071 "trsvcid": "36934" 00:18:21.071 }, 00:18:21.071 "auth": { 00:18:21.071 "state": "completed", 00:18:21.071 "digest": "sha256", 00:18:21.071 "dhgroup": "ffdhe3072" 00:18:21.071 } 00:18:21.071 } 00:18:21.071 ]' 00:18:21.071 10:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.071 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.071 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.071 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.071 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.071 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.071 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.071 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.329 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:21.896 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.156 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:22.156 10:45:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.156 10:45:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.156 10:45:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.156 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.156 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.156 10:45:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.156 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.415 00:18:22.415 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.415 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.415 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.674 10:45:51 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.674 { 00:18:22.674 "cntlid": 23, 00:18:22.674 "qid": 0, 00:18:22.674 "state": "enabled", 00:18:22.674 "listen_address": { 00:18:22.674 "trtype": "RDMA", 00:18:22.674 "adrfam": "IPv4", 00:18:22.674 "traddr": "192.168.100.8", 00:18:22.674 "trsvcid": "4420" 00:18:22.674 }, 00:18:22.674 "peer_address": { 00:18:22.674 "trtype": "RDMA", 00:18:22.674 "adrfam": "IPv4", 00:18:22.674 "traddr": "192.168.100.8", 00:18:22.674 "trsvcid": "60362" 00:18:22.674 }, 00:18:22.674 "auth": { 00:18:22.674 "state": "completed", 00:18:22.674 "digest": "sha256", 00:18:22.674 "dhgroup": "ffdhe3072" 00:18:22.674 } 00:18:22.674 } 00:18:22.674 ]' 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.674 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.933 10:45:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:23.500 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.759 10:45:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.017 00:18:24.017 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.017 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.017 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.276 { 00:18:24.276 "cntlid": 25, 00:18:24.276 "qid": 0, 00:18:24.276 "state": "enabled", 00:18:24.276 "listen_address": { 00:18:24.276 "trtype": "RDMA", 00:18:24.276 "adrfam": "IPv4", 00:18:24.276 "traddr": "192.168.100.8", 00:18:24.276 "trsvcid": "4420" 00:18:24.276 }, 00:18:24.276 "peer_address": { 00:18:24.276 "trtype": "RDMA", 00:18:24.276 "adrfam": "IPv4", 
00:18:24.276 "traddr": "192.168.100.8", 00:18:24.276 "trsvcid": "42099" 00:18:24.276 }, 00:18:24.276 "auth": { 00:18:24.276 "state": "completed", 00:18:24.276 "digest": "sha256", 00:18:24.276 "dhgroup": "ffdhe4096" 00:18:24.276 } 00:18:24.276 } 00:18:24.276 ]' 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.276 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.535 10:45:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:25.102 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.418 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.676 00:18:25.676 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.676 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.676 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.935 { 00:18:25.935 "cntlid": 27, 00:18:25.935 "qid": 0, 00:18:25.935 "state": "enabled", 00:18:25.935 "listen_address": { 00:18:25.935 "trtype": "RDMA", 00:18:25.935 "adrfam": "IPv4", 00:18:25.935 "traddr": "192.168.100.8", 00:18:25.935 "trsvcid": "4420" 00:18:25.935 }, 00:18:25.935 "peer_address": { 00:18:25.935 "trtype": "RDMA", 00:18:25.935 "adrfam": "IPv4", 00:18:25.935 "traddr": "192.168.100.8", 00:18:25.935 "trsvcid": "49971" 00:18:25.935 }, 00:18:25.935 "auth": { 00:18:25.935 "state": "completed", 00:18:25.935 "digest": "sha256", 00:18:25.935 "dhgroup": "ffdhe4096" 00:18:25.935 } 00:18:25.935 } 00:18:25.935 ]' 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:25.935 10:45:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.193 10:45:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:26.760 10:45:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.019 10:45:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:27.019 10:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.019 10:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.019 10:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.019 10:45:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.019 10:45:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.019 10:45:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.019 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.277 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.536 { 00:18:27.536 "cntlid": 29, 00:18:27.536 "qid": 0, 00:18:27.536 "state": "enabled", 00:18:27.536 "listen_address": { 00:18:27.536 "trtype": "RDMA", 00:18:27.536 "adrfam": "IPv4", 00:18:27.536 "traddr": "192.168.100.8", 00:18:27.536 "trsvcid": "4420" 00:18:27.536 }, 00:18:27.536 "peer_address": { 00:18:27.536 "trtype": "RDMA", 00:18:27.536 "adrfam": "IPv4", 00:18:27.536 "traddr": "192.168.100.8", 00:18:27.536 "trsvcid": "39175" 00:18:27.536 }, 00:18:27.536 "auth": { 00:18:27.536 "state": "completed", 00:18:27.536 "digest": "sha256", 00:18:27.536 "dhgroup": "ffdhe4096" 00:18:27.536 } 00:18:27.536 } 00:18:27.536 ]' 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.536 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.795 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.795 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.795 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.795 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.795 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.795 10:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:28.362 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.621 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:28.621 10:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.621 10:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.621 10:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.621 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.621 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.621 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.879 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.138 00:18:29.138 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.138 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.138 10:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.138 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.138 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.138 10:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.138 10:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.138 10:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:18:29.138 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.138 { 00:18:29.138 "cntlid": 31, 00:18:29.139 "qid": 0, 00:18:29.139 "state": "enabled", 00:18:29.139 "listen_address": { 00:18:29.139 "trtype": "RDMA", 00:18:29.139 "adrfam": "IPv4", 00:18:29.139 "traddr": "192.168.100.8", 00:18:29.139 "trsvcid": "4420" 00:18:29.139 }, 00:18:29.139 "peer_address": { 00:18:29.139 "trtype": "RDMA", 00:18:29.139 "adrfam": "IPv4", 00:18:29.139 "traddr": "192.168.100.8", 00:18:29.139 "trsvcid": "52133" 00:18:29.139 }, 00:18:29.139 "auth": { 00:18:29.139 "state": "completed", 00:18:29.139 "digest": "sha256", 00:18:29.139 "dhgroup": "ffdhe4096" 00:18:29.139 } 00:18:29.139 } 00:18:29.139 ]' 00:18:29.139 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.397 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.397 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.397 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.397 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.397 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.397 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.398 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.656 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:30.224 10:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.224 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.481 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.738 00:18:30.738 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.738 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.738 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.996 { 00:18:30.996 "cntlid": 33, 00:18:30.996 "qid": 0, 00:18:30.996 "state": "enabled", 00:18:30.996 "listen_address": { 00:18:30.996 "trtype": "RDMA", 00:18:30.996 "adrfam": "IPv4", 00:18:30.996 "traddr": "192.168.100.8", 00:18:30.996 "trsvcid": "4420" 00:18:30.996 }, 00:18:30.996 "peer_address": { 00:18:30.996 "trtype": "RDMA", 00:18:30.996 "adrfam": "IPv4", 00:18:30.996 "traddr": "192.168.100.8", 00:18:30.996 "trsvcid": "56980" 00:18:30.996 }, 00:18:30.996 "auth": { 00:18:30.996 "state": "completed", 00:18:30.996 "digest": "sha256", 00:18:30.996 "dhgroup": "ffdhe6144" 00:18:30.996 } 00:18:30.996 } 00:18:30.996 ]' 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.996 10:45:59 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.996 10:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.255 10:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:31.828 10:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.087 10:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:32.087 10:46:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.087 10:46:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.087 10:46:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.087 10:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.087 10:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.087 10:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.087 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.654 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.654 { 00:18:32.654 "cntlid": 35, 00:18:32.654 "qid": 0, 00:18:32.654 "state": "enabled", 00:18:32.654 "listen_address": { 00:18:32.654 "trtype": "RDMA", 00:18:32.654 "adrfam": "IPv4", 00:18:32.654 "traddr": "192.168.100.8", 00:18:32.654 "trsvcid": "4420" 00:18:32.654 }, 00:18:32.654 "peer_address": { 00:18:32.654 "trtype": "RDMA", 00:18:32.654 "adrfam": "IPv4", 00:18:32.654 "traddr": "192.168.100.8", 00:18:32.654 "trsvcid": "38658" 00:18:32.654 }, 00:18:32.654 "auth": { 00:18:32.654 "state": "completed", 00:18:32.654 "digest": "sha256", 00:18:32.654 "dhgroup": "ffdhe6144" 00:18:32.654 } 00:18:32.654 } 00:18:32.654 ]' 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.654 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.913 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.913 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.913 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.913 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.913 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.913 10:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:33.482 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.741 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:33.741 10:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.741 10:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.741 10:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.741 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.741 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.741 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.000 10:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.259 00:18:34.259 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.259 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:34.259 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.517 { 00:18:34.517 "cntlid": 37, 00:18:34.517 "qid": 0, 00:18:34.517 "state": "enabled", 00:18:34.517 "listen_address": { 00:18:34.517 "trtype": "RDMA", 00:18:34.517 "adrfam": "IPv4", 00:18:34.517 "traddr": "192.168.100.8", 00:18:34.517 "trsvcid": "4420" 00:18:34.517 }, 00:18:34.517 "peer_address": { 00:18:34.517 "trtype": "RDMA", 00:18:34.517 "adrfam": "IPv4", 00:18:34.517 "traddr": "192.168.100.8", 00:18:34.517 "trsvcid": "43697" 00:18:34.517 }, 00:18:34.517 "auth": { 00:18:34.517 "state": "completed", 00:18:34.517 "digest": "sha256", 00:18:34.517 "dhgroup": "ffdhe6144" 00:18:34.517 } 00:18:34.517 } 00:18:34.517 ]' 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.517 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.776 10:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:35.343 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.600 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.601 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.176 00:18:36.176 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.176 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.176 10:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.176 { 00:18:36.176 "cntlid": 39, 00:18:36.176 "qid": 0, 00:18:36.176 "state": "enabled", 00:18:36.176 "listen_address": { 00:18:36.176 "trtype": "RDMA", 00:18:36.176 "adrfam": "IPv4", 00:18:36.176 "traddr": "192.168.100.8", 00:18:36.176 "trsvcid": "4420" 00:18:36.176 }, 00:18:36.176 "peer_address": { 00:18:36.176 "trtype": "RDMA", 00:18:36.176 
"adrfam": "IPv4", 00:18:36.176 "traddr": "192.168.100.8", 00:18:36.176 "trsvcid": "50328" 00:18:36.176 }, 00:18:36.176 "auth": { 00:18:36.176 "state": "completed", 00:18:36.176 "digest": "sha256", 00:18:36.176 "dhgroup": "ffdhe6144" 00:18:36.176 } 00:18:36.176 } 00:18:36.176 ]' 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.176 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.434 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.434 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.434 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.434 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:37.001 10:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.261 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:37.520 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.520 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.520 10:46:06 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.520 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.520 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.520 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.520 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.520 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.778 00:18:37.778 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.778 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.779 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.037 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.037 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.037 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.037 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.037 10:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.037 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.037 { 00:18:38.037 "cntlid": 41, 00:18:38.037 "qid": 0, 00:18:38.037 "state": "enabled", 00:18:38.037 "listen_address": { 00:18:38.037 "trtype": "RDMA", 00:18:38.037 "adrfam": "IPv4", 00:18:38.037 "traddr": "192.168.100.8", 00:18:38.037 "trsvcid": "4420" 00:18:38.037 }, 00:18:38.037 "peer_address": { 00:18:38.037 "trtype": "RDMA", 00:18:38.037 "adrfam": "IPv4", 00:18:38.037 "traddr": "192.168.100.8", 00:18:38.037 "trsvcid": "44128" 00:18:38.037 }, 00:18:38.037 "auth": { 00:18:38.037 "state": "completed", 00:18:38.037 "digest": "sha256", 00:18:38.037 "dhgroup": "ffdhe8192" 00:18:38.037 } 00:18:38.037 } 00:18:38.037 ]' 00:18:38.037 10:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.037 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.037 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.037 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.037 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.296 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.296 10:46:07 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.296 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.296 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:38.864 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.123 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:39.123 10:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.123 10:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.123 10:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.123 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.123 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.123 10:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.382 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.640 00:18:39.640 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.640 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.640 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.898 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.898 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.898 10:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.898 10:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.898 10:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.898 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.898 { 00:18:39.898 "cntlid": 43, 00:18:39.899 "qid": 0, 00:18:39.899 "state": "enabled", 00:18:39.899 "listen_address": { 00:18:39.899 "trtype": "RDMA", 00:18:39.899 "adrfam": "IPv4", 00:18:39.899 "traddr": "192.168.100.8", 00:18:39.899 "trsvcid": "4420" 00:18:39.899 }, 00:18:39.899 "peer_address": { 00:18:39.899 "trtype": "RDMA", 00:18:39.899 "adrfam": "IPv4", 00:18:39.899 "traddr": "192.168.100.8", 00:18:39.899 "trsvcid": "43567" 00:18:39.899 }, 00:18:39.899 "auth": { 00:18:39.899 "state": "completed", 00:18:39.899 "digest": "sha256", 00:18:39.899 "dhgroup": "ffdhe8192" 00:18:39.899 } 00:18:39.899 } 00:18:39.899 ]' 00:18:39.899 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.899 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.899 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.899 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.899 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.157 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.157 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.157 10:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.157 10:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:40.725 10:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.984 10:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:40.984 10:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.984 10:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.984 10:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.984 10:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.984 10:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.984 10:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.243 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.878 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.878 10:46:10 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.878 { 00:18:41.878 "cntlid": 45, 00:18:41.878 "qid": 0, 00:18:41.878 "state": "enabled", 00:18:41.878 "listen_address": { 00:18:41.878 "trtype": "RDMA", 00:18:41.878 "adrfam": "IPv4", 00:18:41.878 "traddr": "192.168.100.8", 00:18:41.878 "trsvcid": "4420" 00:18:41.878 }, 00:18:41.878 "peer_address": { 00:18:41.878 "trtype": "RDMA", 00:18:41.878 "adrfam": "IPv4", 00:18:41.878 "traddr": "192.168.100.8", 00:18:41.878 "trsvcid": "59413" 00:18:41.878 }, 00:18:41.878 "auth": { 00:18:41.878 "state": "completed", 00:18:41.878 "digest": "sha256", 00:18:41.878 "dhgroup": "ffdhe8192" 00:18:41.878 } 00:18:41.878 } 00:18:41.878 ]' 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.878 10:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.137 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:42.703 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe8192 3 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.963 10:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.530 00:18:43.530 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.530 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.530 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.789 { 00:18:43.789 "cntlid": 47, 00:18:43.789 "qid": 0, 00:18:43.789 "state": "enabled", 00:18:43.789 "listen_address": { 00:18:43.789 "trtype": "RDMA", 00:18:43.789 "adrfam": "IPv4", 00:18:43.789 "traddr": "192.168.100.8", 00:18:43.789 "trsvcid": "4420" 00:18:43.789 }, 00:18:43.789 "peer_address": { 00:18:43.789 "trtype": "RDMA", 00:18:43.789 "adrfam": "IPv4", 00:18:43.789 "traddr": "192.168.100.8", 00:18:43.789 "trsvcid": "55990" 00:18:43.789 }, 00:18:43.789 "auth": { 00:18:43.789 "state": "completed", 00:18:43.789 "digest": "sha256", 00:18:43.789 "dhgroup": "ffdhe8192" 00:18:43.789 } 00:18:43.789 } 00:18:43.789 ]' 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.789 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.048 10:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.615 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.875 10:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.133 00:18:45.133 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.133 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.133 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.392 { 00:18:45.392 "cntlid": 49, 00:18:45.392 "qid": 0, 00:18:45.392 "state": "enabled", 00:18:45.392 "listen_address": { 00:18:45.392 "trtype": "RDMA", 00:18:45.392 "adrfam": "IPv4", 00:18:45.392 "traddr": "192.168.100.8", 00:18:45.392 "trsvcid": "4420" 00:18:45.392 }, 00:18:45.392 "peer_address": { 00:18:45.392 "trtype": "RDMA", 00:18:45.392 "adrfam": "IPv4", 00:18:45.392 "traddr": "192.168.100.8", 00:18:45.392 "trsvcid": "49808" 00:18:45.392 }, 00:18:45.392 "auth": { 00:18:45.392 "state": "completed", 00:18:45.392 "digest": "sha384", 00:18:45.392 "dhgroup": "null" 00:18:45.392 } 00:18:45.392 } 00:18:45.392 ]' 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.392 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.651 10:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:46.217 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.217 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:46.217 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.218 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.218 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.218 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.218 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.218 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.477 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.736 00:18:46.736 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.736 10:46:15 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.736 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.995 { 00:18:46.995 "cntlid": 51, 00:18:46.995 "qid": 0, 00:18:46.995 "state": "enabled", 00:18:46.995 "listen_address": { 00:18:46.995 "trtype": "RDMA", 00:18:46.995 "adrfam": "IPv4", 00:18:46.995 "traddr": "192.168.100.8", 00:18:46.995 "trsvcid": "4420" 00:18:46.995 }, 00:18:46.995 "peer_address": { 00:18:46.995 "trtype": "RDMA", 00:18:46.995 "adrfam": "IPv4", 00:18:46.995 "traddr": "192.168.100.8", 00:18:46.995 "trsvcid": "48924" 00:18:46.995 }, 00:18:46.995 "auth": { 00:18:46.995 "state": "completed", 00:18:46.995 "digest": "sha384", 00:18:46.995 "dhgroup": "null" 00:18:46.995 } 00:18:46.995 } 00:18:46.995 ]' 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.995 10:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.254 10:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:47.822 10:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.822 10:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:47.823 10:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.823 10:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.823 10:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
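[Editor's sketch] For readers following the xtrace above: every iteration of this test runs the same connect/authenticate/verify/teardown sequence, first through the SPDK bdev_nvme host stack and then through the kernel nvme initiator. The sketch below condenses that sequence from the RPC and nvme-cli calls visible in this excerpt; it is a reconstruction, not the literal target/auth.sh source. The rpc()/hostrpc() helper names, the digest and dhgroup lists (abbreviated to the values that appear in this excerpt), and the elided DHHC-1 secrets are illustrative assumptions.

#!/usr/bin/env bash
# Condensed sketch of the loop traced above (not the literal target/auth.sh).
set -e

rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
hostid=801347e8-3fd0-e911-906e-0017a4403562

rpc()     { "$rpc_py" "$@"; }                        # target side (default socket)
hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }  # host side, as in the trace

for digest in sha256 sha384; do                      # digests visible in this excerpt
  for dhgroup in null ffdhe4096 ffdhe6144 ffdhe8192; do  # groups visible here
    for keyid in 0 1 2 3; do
      # In the trace, key3 is added without a controller key.
      ckey=()
      if [[ $keyid != 3 ]]; then ckey=(--dhchap-ctrlr-key "ckey$keyid"); fi

      # Pin the host to one digest/dhgroup pair, authorize the host on the
      # subsystem, then attach -- this is where DH-HMAC-CHAP actually runs.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
              --dhchap-dhgroups "$dhgroup"
      rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
              --dhchap-key "key$keyid" "${ckey[@]}"
      hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
              -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
              --dhchap-key "key$keyid" "${ckey[@]}"

      # Verify the admin qpair (qid 0) negotiated exactly what was requested;
      # with set -e, any failed [[ ]] aborts the run.
      qpairs=$(rpc nvmf_subsystem_get_qpairs "$subnqn")
      [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
      [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

      # Repeat the handshake from the kernel initiator, then tear down.
      hostrpc bdev_nvme_detach_controller nvme0
      nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
          -q "$hostnqn" --hostid "$hostid" \
          --dhchap-secret "DHHC-1:..."   # secrets elided; the trace shows the real values
      nvme disconnect -n "$subnqn"
      rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
  done
done

Note the two-sided coverage: the same key pair must authenticate both the SPDK bdev_nvme initiator (driven over /var/tmp/host.sock) and the kernel nvme initiator before the script advances to the next keyid.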
00:18:47.823 10:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.823 10:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:47.823 10:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.081 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:48.081 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.081 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.081 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.081 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.081 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.082 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.082 10:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.082 10:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.082 10:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.082 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.082 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.340 00:18:48.340 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.340 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.340 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.599 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.599 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.599 10:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.599 10:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.599 10:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.599 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.599 { 00:18:48.599 "cntlid": 53, 00:18:48.599 "qid": 0, 00:18:48.599 "state": "enabled", 00:18:48.599 "listen_address": { 00:18:48.599 "trtype": "RDMA", 00:18:48.599 "adrfam": "IPv4", 00:18:48.599 
"traddr": "192.168.100.8", 00:18:48.599 "trsvcid": "4420" 00:18:48.599 }, 00:18:48.599 "peer_address": { 00:18:48.599 "trtype": "RDMA", 00:18:48.599 "adrfam": "IPv4", 00:18:48.599 "traddr": "192.168.100.8", 00:18:48.599 "trsvcid": "45358" 00:18:48.599 }, 00:18:48.599 "auth": { 00:18:48.599 "state": "completed", 00:18:48.599 "digest": "sha384", 00:18:48.599 "dhgroup": "null" 00:18:48.599 } 00:18:48.599 } 00:18:48.600 ]' 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.600 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.858 10:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:49.426 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.686 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.945 00:18:49.945 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.945 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.945 10:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.204 { 00:18:50.204 "cntlid": 55, 00:18:50.204 "qid": 0, 00:18:50.204 "state": "enabled", 00:18:50.204 "listen_address": { 00:18:50.204 "trtype": "RDMA", 00:18:50.204 "adrfam": "IPv4", 00:18:50.204 "traddr": "192.168.100.8", 00:18:50.204 "trsvcid": "4420" 00:18:50.204 }, 00:18:50.204 "peer_address": { 00:18:50.204 "trtype": "RDMA", 00:18:50.204 "adrfam": "IPv4", 00:18:50.204 "traddr": "192.168.100.8", 00:18:50.204 "trsvcid": "56352" 00:18:50.204 }, 00:18:50.204 "auth": { 00:18:50.204 "state": "completed", 00:18:50.204 "digest": "sha384", 00:18:50.204 "dhgroup": "null" 00:18:50.204 } 00:18:50.204 } 00:18:50.204 ]' 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:50.204 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.463 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:51.031 10:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.031 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:51.031 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.031 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.290 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.549 00:18:51.549 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.549 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.549 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.808 { 00:18:51.808 "cntlid": 57, 00:18:51.808 "qid": 0, 00:18:51.808 "state": "enabled", 00:18:51.808 "listen_address": { 00:18:51.808 "trtype": "RDMA", 00:18:51.808 "adrfam": "IPv4", 00:18:51.808 "traddr": "192.168.100.8", 00:18:51.808 "trsvcid": "4420" 00:18:51.808 }, 00:18:51.808 "peer_address": { 00:18:51.808 "trtype": "RDMA", 00:18:51.808 "adrfam": "IPv4", 00:18:51.808 "traddr": "192.168.100.8", 00:18:51.808 "trsvcid": "45664" 00:18:51.808 }, 00:18:51.808 "auth": { 00:18:51.808 "state": "completed", 00:18:51.808 "digest": "sha384", 00:18:51.808 "dhgroup": "ffdhe2048" 00:18:51.808 } 00:18:51.808 } 00:18:51.808 ]' 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.808 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.067 10:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:52.634 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.893 10:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.152 00:18:53.152 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.152 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.152 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.410 
10:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.410 { 00:18:53.410 "cntlid": 59, 00:18:53.410 "qid": 0, 00:18:53.410 "state": "enabled", 00:18:53.410 "listen_address": { 00:18:53.410 "trtype": "RDMA", 00:18:53.410 "adrfam": "IPv4", 00:18:53.410 "traddr": "192.168.100.8", 00:18:53.410 "trsvcid": "4420" 00:18:53.410 }, 00:18:53.410 "peer_address": { 00:18:53.410 "trtype": "RDMA", 00:18:53.410 "adrfam": "IPv4", 00:18:53.410 "traddr": "192.168.100.8", 00:18:53.410 "trsvcid": "45320" 00:18:53.410 }, 00:18:53.410 "auth": { 00:18:53.410 "state": "completed", 00:18:53.410 "digest": "sha384", 00:18:53.410 "dhgroup": "ffdhe2048" 00:18:53.410 } 00:18:53.410 } 00:18:53.410 ]' 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.410 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.668 10:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:18:54.235 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe2048 2 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.493 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.494 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.752 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.752 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.752 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.752 00:18:54.752 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.752 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.752 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.009 { 00:18:55.009 "cntlid": 61, 00:18:55.009 "qid": 0, 00:18:55.009 "state": "enabled", 00:18:55.009 "listen_address": { 00:18:55.009 "trtype": "RDMA", 00:18:55.009 "adrfam": "IPv4", 00:18:55.009 "traddr": "192.168.100.8", 00:18:55.009 "trsvcid": "4420" 00:18:55.009 }, 00:18:55.009 "peer_address": { 00:18:55.009 "trtype": "RDMA", 00:18:55.009 "adrfam": "IPv4", 00:18:55.009 "traddr": "192.168.100.8", 00:18:55.009 "trsvcid": "46261" 00:18:55.009 }, 00:18:55.009 "auth": { 00:18:55.009 "state": "completed", 00:18:55.009 "digest": "sha384", 00:18:55.009 "dhgroup": "ffdhe2048" 00:18:55.009 } 00:18:55.009 } 00:18:55.009 ]' 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.009 10:46:23 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.009 10:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.009 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.009 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.267 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.267 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.267 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.267 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:18:55.833 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.091 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:56.091 10:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.091 10:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.091 10:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.091 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.091 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.091 10:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.349 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.349 00:18:56.607 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.607 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.608 { 00:18:56.608 "cntlid": 63, 00:18:56.608 "qid": 0, 00:18:56.608 "state": "enabled", 00:18:56.608 "listen_address": { 00:18:56.608 "trtype": "RDMA", 00:18:56.608 "adrfam": "IPv4", 00:18:56.608 "traddr": "192.168.100.8", 00:18:56.608 "trsvcid": "4420" 00:18:56.608 }, 00:18:56.608 "peer_address": { 00:18:56.608 "trtype": "RDMA", 00:18:56.608 "adrfam": "IPv4", 00:18:56.608 "traddr": "192.168.100.8", 00:18:56.608 "trsvcid": "49242" 00:18:56.608 }, 00:18:56.608 "auth": { 00:18:56.608 "state": "completed", 00:18:56.608 "digest": "sha384", 00:18:56.608 "dhgroup": "ffdhe2048" 00:18:56.608 } 00:18:56.608 } 00:18:56.608 ]' 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.608 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.866 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.866 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.866 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.866 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.866 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.124 10:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:18:57.690 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.690 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:57.690 10:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.690 10:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.690 10:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.690 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.691 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.691 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:57.691 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.949 10:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.209 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.209 { 00:18:58.209 "cntlid": 65, 00:18:58.209 "qid": 0, 00:18:58.209 "state": "enabled", 00:18:58.209 "listen_address": { 00:18:58.209 "trtype": "RDMA", 00:18:58.209 "adrfam": "IPv4", 00:18:58.209 "traddr": "192.168.100.8", 00:18:58.209 "trsvcid": "4420" 00:18:58.209 }, 00:18:58.209 "peer_address": { 00:18:58.209 "trtype": "RDMA", 00:18:58.209 "adrfam": "IPv4", 00:18:58.209 "traddr": "192.168.100.8", 00:18:58.209 "trsvcid": "37897" 00:18:58.209 }, 00:18:58.209 "auth": { 00:18:58.209 "state": "completed", 00:18:58.209 "digest": "sha384", 00:18:58.209 "dhgroup": "ffdhe3072" 00:18:58.209 } 00:18:58.209 } 00:18:58.209 ]' 00:18:58.209 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.511 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.511 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.511 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.511 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.511 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.511 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.511 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.791 10:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:18:59.050 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.308 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:59.308 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.308 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.308 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.308 10:46:28 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.308 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.308 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.567 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.826 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.826 { 00:18:59.826 "cntlid": 67, 00:18:59.826 "qid": 0, 00:18:59.826 "state": "enabled", 00:18:59.826 "listen_address": { 00:18:59.826 "trtype": "RDMA", 00:18:59.826 "adrfam": "IPv4", 00:18:59.826 "traddr": "192.168.100.8", 
00:18:59.826 "trsvcid": "4420" 00:18:59.826 }, 00:18:59.826 "peer_address": { 00:18:59.826 "trtype": "RDMA", 00:18:59.826 "adrfam": "IPv4", 00:18:59.826 "traddr": "192.168.100.8", 00:18:59.826 "trsvcid": "33822" 00:18:59.826 }, 00:18:59.826 "auth": { 00:18:59.826 "state": "completed", 00:18:59.826 "digest": "sha384", 00:18:59.826 "dhgroup": "ffdhe3072" 00:18:59.826 } 00:18:59.826 } 00:18:59.826 ]' 00:18:59.826 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.085 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.085 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.085 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.085 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.085 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.085 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.085 10:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.344 10:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:19:00.911 10:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.911 10:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:00.911 10:46:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.911 10:46:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.912 10:46:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.912 10:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.912 10:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.912 10:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.170 10:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.171 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.171 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.429 00:19:01.429 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.429 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.429 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.429 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.429 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.429 10:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.430 10:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.689 { 00:19:01.689 "cntlid": 69, 00:19:01.689 "qid": 0, 00:19:01.689 "state": "enabled", 00:19:01.689 "listen_address": { 00:19:01.689 "trtype": "RDMA", 00:19:01.689 "adrfam": "IPv4", 00:19:01.689 "traddr": "192.168.100.8", 00:19:01.689 "trsvcid": "4420" 00:19:01.689 }, 00:19:01.689 "peer_address": { 00:19:01.689 "trtype": "RDMA", 00:19:01.689 "adrfam": "IPv4", 00:19:01.689 "traddr": "192.168.100.8", 00:19:01.689 "trsvcid": "56600" 00:19:01.689 }, 00:19:01.689 "auth": { 00:19:01.689 "state": "completed", 00:19:01.689 "digest": "sha384", 00:19:01.689 "dhgroup": "ffdhe3072" 00:19:01.689 } 00:19:01.689 } 00:19:01.689 ]' 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.689 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.948 10:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:02.516 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.775 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.033 00:19:03.033 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.033 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.033 10:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.033 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.291 { 00:19:03.291 "cntlid": 71, 00:19:03.291 "qid": 0, 00:19:03.291 "state": "enabled", 00:19:03.291 "listen_address": { 00:19:03.291 "trtype": "RDMA", 00:19:03.291 "adrfam": "IPv4", 00:19:03.291 "traddr": "192.168.100.8", 00:19:03.291 "trsvcid": "4420" 00:19:03.291 }, 00:19:03.291 "peer_address": { 00:19:03.291 "trtype": "RDMA", 00:19:03.291 "adrfam": "IPv4", 00:19:03.291 "traddr": "192.168.100.8", 00:19:03.291 "trsvcid": "43178" 00:19:03.291 }, 00:19:03.291 "auth": { 00:19:03.291 "state": "completed", 00:19:03.291 "digest": "sha384", 00:19:03.291 "dhgroup": "ffdhe3072" 00:19:03.291 } 00:19:03.291 } 00:19:03.291 ]' 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.291 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.548 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:19:04.116 10:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:04.116 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.375 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.634 00:19:04.634 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.634 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.634 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.893 { 00:19:04.893 "cntlid": 73, 00:19:04.893 "qid": 0, 00:19:04.893 "state": "enabled", 00:19:04.893 "listen_address": { 00:19:04.893 "trtype": "RDMA", 00:19:04.893 "adrfam": "IPv4", 00:19:04.893 "traddr": "192.168.100.8", 00:19:04.893 "trsvcid": "4420" 00:19:04.893 }, 00:19:04.893 "peer_address": { 00:19:04.893 "trtype": "RDMA", 00:19:04.893 "adrfam": "IPv4", 00:19:04.893 "traddr": "192.168.100.8", 00:19:04.893 "trsvcid": "47448" 00:19:04.893 }, 00:19:04.893 "auth": { 00:19:04.893 "state": "completed", 00:19:04.893 "digest": "sha384", 00:19:04.893 "dhgroup": "ffdhe4096" 00:19:04.893 } 00:19:04.893 } 00:19:04.893 ]' 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.893 10:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.152 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:19:05.720 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 
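[Editor's note] The records above repeat one fixed DH-HMAC-CHAP round per key: pin the host to a digest/DH-group pair, register the host NQN on the subsystem with that key, then attach a controller so the handshake runs. A minimal sketch of that round, with the socket path, NQNs, and RPC flags copied from the trace; the two wrapper functions and the loose variables are illustrative stand-ins for helpers defined earlier in auth.sh, and the target-side socket being the default one is an assumption:

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side RPC socket, per auth.sh@31
  rpc_cmd() { "$rpc" "$@"; }                         # assumption: target side on the default socket

  digest=sha384 dhgroup=ffdhe4096 keyid=1
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

  # 1) restrict the host to a single digest/DH-group combination
  hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # 2) allow the host NQN on the subsystem, bound to key$keyid and its controller key
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # 3) attach a controller over RDMA; this is where the DH-HMAC-CHAP handshake happens
  hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
      -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

After the checks pass, the trace detaches the controller and removes the host again before the next key is tried.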
00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.979 10:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.238 00:19:06.238 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.238 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.238 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.497 { 00:19:06.497 "cntlid": 75, 00:19:06.497 "qid": 0, 00:19:06.497 "state": "enabled", 00:19:06.497 "listen_address": { 00:19:06.497 "trtype": "RDMA", 00:19:06.497 "adrfam": "IPv4", 00:19:06.497 "traddr": "192.168.100.8", 00:19:06.497 "trsvcid": "4420" 00:19:06.497 }, 00:19:06.497 "peer_address": { 00:19:06.497 "trtype": "RDMA", 00:19:06.497 "adrfam": "IPv4", 00:19:06.497 "traddr": "192.168.100.8", 00:19:06.497 "trsvcid": "49808" 00:19:06.497 }, 00:19:06.497 "auth": { 00:19:06.497 "state": "completed", 00:19:06.497 "digest": "sha384", 00:19:06.497 "dhgroup": "ffdhe4096" 00:19:06.497 } 00:19:06.497 } 00:19:06.497 ]' 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.497 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.755 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.755 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.756 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.756 10:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:19:07.323 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.582 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.842 00:19:07.842 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.842 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.842 10:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.102 { 00:19:08.102 "cntlid": 77, 00:19:08.102 "qid": 0, 00:19:08.102 "state": "enabled", 00:19:08.102 "listen_address": { 00:19:08.102 "trtype": "RDMA", 00:19:08.102 "adrfam": "IPv4", 00:19:08.102 "traddr": "192.168.100.8", 00:19:08.102 "trsvcid": "4420" 00:19:08.102 }, 00:19:08.102 "peer_address": { 00:19:08.102 "trtype": "RDMA", 00:19:08.102 "adrfam": "IPv4", 00:19:08.102 "traddr": "192.168.100.8", 00:19:08.102 "trsvcid": "49027" 00:19:08.102 }, 00:19:08.102 "auth": { 00:19:08.102 "state": "completed", 00:19:08.102 "digest": "sha384", 00:19:08.102 "dhgroup": "ffdhe4096" 00:19:08.102 } 00:19:08.102 } 00:19:08.102 ]' 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.102 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.361 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.361 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.361 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.361 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.361 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.361 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:19:08.928 10:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.186 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:09.186 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.186 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.186 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.186 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.186 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:09.186 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.445 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.704 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.704 10:46:38 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.704 { 00:19:09.704 "cntlid": 79, 00:19:09.704 "qid": 0, 00:19:09.704 "state": "enabled", 00:19:09.704 "listen_address": { 00:19:09.704 "trtype": "RDMA", 00:19:09.704 "adrfam": "IPv4", 00:19:09.704 "traddr": "192.168.100.8", 00:19:09.704 "trsvcid": "4420" 00:19:09.704 }, 00:19:09.704 "peer_address": { 00:19:09.704 "trtype": "RDMA", 00:19:09.704 "adrfam": "IPv4", 00:19:09.704 "traddr": "192.168.100.8", 00:19:09.704 "trsvcid": "58542" 00:19:09.704 }, 00:19:09.704 "auth": { 00:19:09.704 "state": "completed", 00:19:09.704 "digest": "sha384", 00:19:09.704 "dhgroup": "ffdhe4096" 00:19:09.704 } 00:19:09.704 } 00:19:09.704 ]' 00:19:09.704 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.962 10:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:19:10.527 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
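[Editor's note] The auth.sh@92 and @93 frames here show the test advancing its outer loop from ffdhe4096 to ffdhe6144. A condensed sketch of the control flow those frames suggest, assuming the hostrpc and connect_authenticate helpers from the note above; the dhgroups array lists only the groups visible in this stretch of the log (the script's full list may be longer), and the keys values are placeholders since only the indices 0-3 are confirmed by the trace:

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # groups seen in this log section
  keys=(key0 key1 key2 key3)                          # placeholders; indices 0-3 per the trace
  for dhgroup in "${dhgroups[@]}"; do                 # outer loop, auth.sh@92
      for keyid in "${!keys[@]}"; do                  # inner loop over key indices, auth.sh@93
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
              --dhchap-dhgroups "$dhgroup"            # auth.sh@94
          connect_authenticate sha384 "$dhgroup" "$keyid"   # auth.sh@96
      done
  done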
00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.785 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.044 10:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.303 00:19:11.303 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.303 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.303 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.562 { 00:19:11.562 "cntlid": 81, 00:19:11.562 "qid": 0, 00:19:11.562 "state": "enabled", 00:19:11.562 "listen_address": { 00:19:11.562 "trtype": "RDMA", 00:19:11.562 "adrfam": 
"IPv4", 00:19:11.562 "traddr": "192.168.100.8", 00:19:11.562 "trsvcid": "4420" 00:19:11.562 }, 00:19:11.562 "peer_address": { 00:19:11.562 "trtype": "RDMA", 00:19:11.562 "adrfam": "IPv4", 00:19:11.562 "traddr": "192.168.100.8", 00:19:11.562 "trsvcid": "52507" 00:19:11.562 }, 00:19:11.562 "auth": { 00:19:11.562 "state": "completed", 00:19:11.562 "digest": "sha384", 00:19:11.562 "dhgroup": "ffdhe6144" 00:19:11.562 } 00:19:11.562 } 00:19:11.562 ]' 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.562 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.821 10:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.388 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- 
# key=key1 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.647 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.906 00:19:13.164 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.164 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.164 10:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.164 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.164 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.164 10:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.164 10:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.164 10:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.165 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.165 { 00:19:13.165 "cntlid": 83, 00:19:13.165 "qid": 0, 00:19:13.165 "state": "enabled", 00:19:13.165 "listen_address": { 00:19:13.165 "trtype": "RDMA", 00:19:13.165 "adrfam": "IPv4", 00:19:13.165 "traddr": "192.168.100.8", 00:19:13.165 "trsvcid": "4420" 00:19:13.165 }, 00:19:13.165 "peer_address": { 00:19:13.165 "trtype": "RDMA", 00:19:13.165 "adrfam": "IPv4", 00:19:13.165 "traddr": "192.168.100.8", 00:19:13.165 "trsvcid": "36896" 00:19:13.165 }, 00:19:13.165 "auth": { 00:19:13.165 "state": "completed", 00:19:13.165 "digest": "sha384", 00:19:13.165 "dhgroup": "ffdhe6144" 00:19:13.165 } 00:19:13.165 } 00:19:13.165 ]' 00:19:13.165 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.165 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.165 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.423 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.423 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.423 
10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.423 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.423 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.423 10:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:19:13.991 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.249 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:14.249 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.249 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.249 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.249 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.249 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.249 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.508 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.508 10:46:43 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.767 00:19:14.767 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.767 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.767 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.057 { 00:19:15.057 "cntlid": 85, 00:19:15.057 "qid": 0, 00:19:15.057 "state": "enabled", 00:19:15.057 "listen_address": { 00:19:15.057 "trtype": "RDMA", 00:19:15.057 "adrfam": "IPv4", 00:19:15.057 "traddr": "192.168.100.8", 00:19:15.057 "trsvcid": "4420" 00:19:15.057 }, 00:19:15.057 "peer_address": { 00:19:15.057 "trtype": "RDMA", 00:19:15.057 "adrfam": "IPv4", 00:19:15.057 "traddr": "192.168.100.8", 00:19:15.057 "trsvcid": "52996" 00:19:15.057 }, 00:19:15.057 "auth": { 00:19:15.057 "state": "completed", 00:19:15.057 "digest": "sha384", 00:19:15.057 "dhgroup": "ffdhe6144" 00:19:15.057 } 00:19:15.057 } 00:19:15.057 ]' 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.057 10:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.317 10:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.884 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:15.884 10:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.142 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:16.142 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.142 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.142 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.142 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:16.143 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.143 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:16.143 10:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.143 10:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.143 10:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.143 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.143 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.401 00:19:16.401 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.401 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.401 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.660 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.660 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.660 10:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:19:16.660 10:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.660 10:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.660 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.660 { 00:19:16.660 "cntlid": 87, 00:19:16.660 "qid": 0, 00:19:16.660 "state": "enabled", 00:19:16.660 "listen_address": { 00:19:16.660 "trtype": "RDMA", 00:19:16.660 "adrfam": "IPv4", 00:19:16.661 "traddr": "192.168.100.8", 00:19:16.661 "trsvcid": "4420" 00:19:16.661 }, 00:19:16.661 "peer_address": { 00:19:16.661 "trtype": "RDMA", 00:19:16.661 "adrfam": "IPv4", 00:19:16.661 "traddr": "192.168.100.8", 00:19:16.661 "trsvcid": "39531" 00:19:16.661 }, 00:19:16.661 "auth": { 00:19:16.661 "state": "completed", 00:19:16.661 "digest": "sha384", 00:19:16.661 "dhgroup": "ffdhe6144" 00:19:16.661 } 00:19:16.661 } 00:19:16.661 ]' 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.661 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.920 10:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:19:17.488 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.747 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:17.747 10:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.747 10:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.747 10:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
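[Editor's note] Each round ends with the verification seen at auth.sh@44-48: confirm the attached controller exists, then assert that the subsystem's qpair negotiated the expected auth parameters. A sketch of those checks for the ffdhe8192 iteration that follows, with the jq paths and expected values taken directly from the trace (the wrappers are the illustrative helpers from the earlier note):

  # controller name check, auth.sh@44
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # qpair auth parameters, auth.sh@45-48
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth state of "completed" on the qpair is what distinguishes a successfully authenticated connection from a plain attach.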
00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.748 10:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.315 00:19:18.315 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.315 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.315 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.574 { 00:19:18.574 "cntlid": 89, 00:19:18.574 "qid": 0, 00:19:18.574 "state": "enabled", 00:19:18.574 "listen_address": { 00:19:18.574 "trtype": "RDMA", 00:19:18.574 "adrfam": "IPv4", 00:19:18.574 "traddr": "192.168.100.8", 00:19:18.574 "trsvcid": "4420" 00:19:18.574 }, 00:19:18.574 "peer_address": { 00:19:18.574 "trtype": "RDMA", 00:19:18.574 "adrfam": "IPv4", 00:19:18.574 "traddr": "192.168.100.8", 00:19:18.574 "trsvcid": "44881" 00:19:18.574 }, 00:19:18.574 "auth": { 00:19:18.574 "state": "completed", 00:19:18.574 "digest": "sha384", 00:19:18.574 "dhgroup": "ffdhe8192" 00:19:18.574 } 00:19:18.574 } 00:19:18.574 ]' 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.574 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.575 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.575 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.833 10:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.400 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.659 
10:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:19.659 10:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:20.227
00:19:20.227 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:20.227 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:20.227 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:20.227 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:20.227 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:20.227 10:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:20.227 10:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:20.488 {
00:19:20.488 "cntlid": 91,
00:19:20.488 "qid": 0,
00:19:20.488 "state": "enabled",
00:19:20.488 "listen_address": {
00:19:20.488 "trtype": "RDMA",
00:19:20.488 "adrfam": "IPv4",
00:19:20.488 "traddr": "192.168.100.8",
00:19:20.488 "trsvcid": "4420"
00:19:20.488 },
00:19:20.488 "peer_address": {
00:19:20.488 "trtype": "RDMA",
00:19:20.488 "adrfam": "IPv4",
00:19:20.488 "traddr": "192.168.100.8",
00:19:20.488 "trsvcid": "55694"
00:19:20.488 },
00:19:20.488 "auth": {
00:19:20.488 "state": "completed",
00:19:20.488 "digest": "sha384",
00:19:20.488 "dhgroup": "ffdhe8192"
00:19:20.488 }
00:19:20.488 }
00:19:20.488 ]'
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.488 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:20.746 10:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==:
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:21.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:21.312 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:21.570 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:22.137
00:19:22.137 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:22.137 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:22.137 10:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.137 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.137 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.137 10:46:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:22.137 10:46:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.137 10:46:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:22.138 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:22.138 {
00:19:22.138 "cntlid": 93,
00:19:22.138 "qid": 0,
00:19:22.138 "state": "enabled",
00:19:22.138 "listen_address": {
00:19:22.138 "trtype": "RDMA",
00:19:22.138 "adrfam": "IPv4",
00:19:22.138 "traddr": "192.168.100.8",
00:19:22.138 "trsvcid": "4420"
00:19:22.138 },
00:19:22.138 "peer_address": {
00:19:22.138 "trtype": "RDMA",
00:19:22.138 "adrfam": "IPv4",
00:19:22.138 "traddr": "192.168.100.8",
00:19:22.138 "trsvcid": "53193"
00:19:22.138 },
00:19:22.138 "auth": {
00:19:22.138 "state": "completed",
00:19:22.138 "digest": "sha384",
00:19:22.138 "dhgroup": "ffdhe8192"
00:19:22.138 }
00:19:22.138 }
00:19:22.138 ]'
00:19:22.138 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:22.138 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:22.138 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:22.395 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:22.395 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:22.395 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:22.395 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:22.396 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:22.396 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/:
00:19:22.960 10:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:23.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:23.218 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:23.218 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:23.218 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.218 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:23.218 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:23.218 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:23.218 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:23.476 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.477 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:23.477 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:23.477 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:23.735
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:23.993 {
00:19:23.993 "cntlid": 95,
00:19:23.993 "qid": 0,
00:19:23.993 "state": "enabled",
00:19:23.993 "listen_address": {
00:19:23.993 "trtype": "RDMA",
00:19:23.993 "adrfam": "IPv4",
00:19:23.993 "traddr": "192.168.100.8",
00:19:23.993 "trsvcid": "4420"
00:19:23.993 },
00:19:23.993 "peer_address": {
00:19:23.993 "trtype": "RDMA",
00:19:23.993 "adrfam": "IPv4",
00:19:23.993 "traddr": "192.168.100.8",
00:19:23.993 "trsvcid": "44909"
00:19:23.993 },
00:19:23.993 "auth": {
00:19:23.993 "state": "completed",
00:19:23.993 "digest": "sha384",
00:19:23.993 "dhgroup": "ffdhe8192"
00:19:23.993 }
00:19:23.993 }
00:19:23.993 ]'
00:19:23.993 10:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:23.993 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:23.993 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:24.250 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:24.251 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:24.251 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:24.251 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:24.251 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:24.251 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=:
00:19:24.818 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:25.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:25.077 10:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:25.336 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:25.595
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:25.595 {
00:19:25.595 "cntlid": 97,
00:19:25.595 "qid": 0,
00:19:25.595 "state": "enabled",
00:19:25.595 "listen_address": {
00:19:25.595 "trtype": "RDMA",
00:19:25.595 "adrfam": "IPv4",
00:19:25.595 "traddr": "192.168.100.8",
00:19:25.595 "trsvcid": "4420"
00:19:25.595 },
00:19:25.595 "peer_address": {
00:19:25.595 "trtype": "RDMA",
00:19:25.595 "adrfam": "IPv4",
00:19:25.595 "traddr": "192.168.100.8",
00:19:25.595 "trsvcid": "37431"
00:19:25.595 },
00:19:25.595 "auth": {
00:19:25.595 "state": "completed",
00:19:25.595 "digest": "sha512",
00:19:25.595 "dhgroup": "null"
00:19:25.595 }
00:19:25.595 }
00:19:25.595 ]'
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:25.595 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:25.854 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:25.854 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:25.854 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:25.854 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:25.854 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:25.854 10:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=:
00:19:26.422 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:26.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:26.681 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:26.681 10:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:26.681 10:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.681 10:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:26.681 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:26.681 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:26.681 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:26.940 10:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:27.198
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:27.198 {
00:19:27.198 "cntlid": 99,
00:19:27.198 "qid": 0,
00:19:27.198 "state": "enabled",
00:19:27.198 "listen_address": {
00:19:27.198 "trtype": "RDMA",
00:19:27.198 "adrfam": "IPv4",
00:19:27.198 "traddr": "192.168.100.8",
00:19:27.198 "trsvcid": "4420"
00:19:27.198 },
00:19:27.198 "peer_address": {
00:19:27.198 "trtype": "RDMA",
00:19:27.198 "adrfam": "IPv4",
00:19:27.198 "traddr": "192.168.100.8",
00:19:27.198 "trsvcid": "35403"
00:19:27.198 },
00:19:27.198 "auth": {
00:19:27.198 "state": "completed",
00:19:27.198 "digest": "sha512",
00:19:27.198 "dhgroup": "null"
00:19:27.198 }
00:19:27.198 }
00:19:27.198 ]'
00:19:27.198 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:27.457 10:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==:
00:19:28.024 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:28.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:28.283 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:28.283 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.283 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:28.283 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:28.283 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:28.283 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:28.542 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:28.801
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:28.801 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:28.801 {
00:19:28.801 "cntlid": 101,
00:19:28.801 "qid": 0,
00:19:28.801 "state": "enabled",
00:19:28.801 "listen_address": {
00:19:28.801 "trtype": "RDMA",
00:19:28.801 "adrfam": "IPv4",
00:19:28.801 "traddr": "192.168.100.8",
00:19:28.802 "trsvcid": "4420"
00:19:28.802 },
00:19:28.802 "peer_address": {
00:19:28.802 "trtype": "RDMA",
00:19:28.802 "adrfam": "IPv4",
00:19:28.802 "traddr": "192.168.100.8",
00:19:28.802 "trsvcid": "45437"
00:19:28.802 },
00:19:28.802 "auth": {
00:19:28.802 "state": "completed",
00:19:28.802 "digest": "sha512",
00:19:28.802 "dhgroup": "null"
00:19:28.802 }
00:19:28.802 }
00:19:28.802 ]'
00:19:28.802 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:29.061 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:29.061 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:29.061 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:29.061 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:29.061 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:29.061 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:29.061 10:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:29.320 10:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/:
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:29.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:29.888 10:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:30.147 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:30.406
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:30.406 {
00:19:30.406 "cntlid": 103,
00:19:30.406 "qid": 0,
00:19:30.406 "state": "enabled",
00:19:30.406 "listen_address": {
00:19:30.406 "trtype": "RDMA",
00:19:30.406 "adrfam": "IPv4",
00:19:30.406 "traddr": "192.168.100.8",
00:19:30.406 "trsvcid": "4420"
00:19:30.406 },
00:19:30.406 "peer_address": {
00:19:30.406 "trtype": "RDMA",
00:19:30.406 "adrfam": "IPv4",
00:19:30.406 "traddr": "192.168.100.8",
00:19:30.406 "trsvcid": "48086"
00:19:30.406 },
00:19:30.406 "auth": {
00:19:30.406 "state": "completed",
00:19:30.406 "digest": "sha512",
00:19:30.406 "dhgroup": "null"
00:19:30.406 }
00:19:30.406 }
00:19:30.406 ]'
00:19:30.406 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:30.665 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:30.665 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:30.665 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:19:30.665 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:30.665 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:30.665 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:30.665 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:30.924 10:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=:
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:31.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:31.490 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:31.780 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:32.047
00:19:32.047 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:32.047 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:32.047 10:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:32.047 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:32.047 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:32.047 10:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:32.047 10:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.047 10:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:32.047 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:32.047 {
00:19:32.047 "cntlid": 105,
00:19:32.048 "qid": 0,
00:19:32.048 "state": "enabled",
00:19:32.048 "listen_address": {
00:19:32.048 "trtype": "RDMA",
00:19:32.048 "adrfam": "IPv4",
00:19:32.048 "traddr": "192.168.100.8",
00:19:32.048 "trsvcid": "4420"
00:19:32.048 },
00:19:32.048 "peer_address": {
00:19:32.048 "trtype": "RDMA",
00:19:32.048 "adrfam": "IPv4",
00:19:32.048 "traddr": "192.168.100.8",
00:19:32.048 "trsvcid": "46519"
00:19:32.048 },
00:19:32.048 "auth": {
00:19:32.048 "state": "completed",
00:19:32.048 "digest": "sha512",
00:19:32.048 "dhgroup": "ffdhe2048"
00:19:32.048 }
00:19:32.048 }
00:19:32.048 ]'
00:19:32.048 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:32.048 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:32.306 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:32.307 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:32.307 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:32.307 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:32.307 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:32.307 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:32.565 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=:
00:19:33.132 10:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:33.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:33.132 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:33.132 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:33.132 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.132 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:33.132 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:33.132 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:33.132 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:33.390 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:33.650
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:33.650 {
00:19:33.650 "cntlid": 107,
00:19:33.650 "qid": 0,
00:19:33.650 "state": "enabled",
00:19:33.650 "listen_address": {
00:19:33.650 "trtype": "RDMA",
00:19:33.650 "adrfam": "IPv4",
00:19:33.650 "traddr": "192.168.100.8",
00:19:33.650 "trsvcid": "4420"
00:19:33.650 },
00:19:33.650 "peer_address": {
00:19:33.650 "trtype": "RDMA",
00:19:33.650 "adrfam": "IPv4",
00:19:33.650 "traddr": "192.168.100.8",
00:19:33.650 "trsvcid": "49830"
00:19:33.650 },
00:19:33.650 "auth": {
00:19:33.650 "state": "completed",
00:19:33.650 "digest": "sha512",
00:19:33.650 "dhgroup": "ffdhe2048"
00:19:33.650 }
00:19:33.650 }
00:19:33.650 ]'
00:19:33.650 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:33.909 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:33.909 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:33.909 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:33.909 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:33.909 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:33.909 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:33.909 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:34.167 10:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==:
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:34.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:34.734 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:34.993 10:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:35.252
00:19:35.252 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:35.252 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:35.252 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:35.252 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:35.252 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:35.252 10:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:35.252 10:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:35.511 {
00:19:35.511 "cntlid": 109,
00:19:35.511 "qid": 0,
00:19:35.511 "state": "enabled",
00:19:35.511 "listen_address": {
00:19:35.511 "trtype": "RDMA",
00:19:35.511 "adrfam": "IPv4",
00:19:35.511 "traddr": "192.168.100.8",
00:19:35.511 "trsvcid": "4420"
00:19:35.511 },
00:19:35.511 "peer_address": {
00:19:35.511 "trtype": "RDMA",
00:19:35.511 "adrfam": "IPv4",
00:19:35.511 "traddr": "192.168.100.8",
00:19:35.511 "trsvcid": "33207"
00:19:35.511 },
00:19:35.511 "auth": {
00:19:35.511 "state": "completed",
00:19:35.511 "digest": "sha512",
00:19:35.511 "dhgroup": "ffdhe2048"
00:19:35.511 }
00:19:35.511 }
00:19:35.511 ]'
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:35.511 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:35.769 10:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/:
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:36.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:36.336 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:36.595 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:36.853
00:19:36.853 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:36.853 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:36.853 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:37.112 {
00:19:37.112 "cntlid": 111,
00:19:37.112 "qid": 0,
00:19:37.112 "state": "enabled",
00:19:37.112 "listen_address": {
00:19:37.112 "trtype": "RDMA",
00:19:37.112 "adrfam": "IPv4",
00:19:37.112 "traddr": "192.168.100.8",
00:19:37.112 "trsvcid": "4420"
00:19:37.112 },
00:19:37.112 "peer_address": {
00:19:37.112 "trtype": "RDMA",
00:19:37.112 "adrfam": "IPv4",
00:19:37.112 "traddr": "192.168.100.8",
00:19:37.112 "trsvcid": "47127"
00:19:37.112 },
00:19:37.112 "auth": {
00:19:37.112 "state": "completed",
00:19:37.112 "digest": "sha512",
00:19:37.112 "dhgroup": "ffdhe2048"
00:19:37.112 }
00:19:37.112 }
00:19:37.112 ]'
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:37.112 10:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:37.112 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:37.112 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:37.112 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:37.371 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=:
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:37.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:37.939 10:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:38.197 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:38.456
00:19:38.456 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:38.456 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:38.456 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:38.716 {
00:19:38.716 "cntlid": 113,
00:19:38.716 "qid": 0,
00:19:38.716 "state": "enabled",
00:19:38.716 "listen_address": {
00:19:38.716 "trtype": "RDMA",
00:19:38.716 "adrfam": "IPv4",
00:19:38.716 "traddr": "192.168.100.8",
00:19:38.716 "trsvcid": "4420"
00:19:38.716 },
00:19:38.716 "peer_address": {
00:19:38.716 "trtype": "RDMA",
00:19:38.716 "adrfam": "IPv4",
00:19:38.716 "traddr": "192.168.100.8",
00:19:38.716 "trsvcid": "47126"
00:19:38.716 },
00:19:38.716 "auth": {
00:19:38.716 "state": "completed",
00:19:38.716 "digest": "sha512",
00:19:38.716 "dhgroup": "ffdhe3072"
00:19:38.716 }
00:19:38.716 }
00:19:38.716 ]'
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:38.716 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.975 10:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=:
00:19:39.542 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:39.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:39.801 10:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:40.060
00:19:40.060 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:40.060 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:40.060 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.318 { 00:19:40.318 "cntlid": 115, 00:19:40.318 "qid": 0, 00:19:40.318 "state": "enabled", 00:19:40.318 "listen_address": { 00:19:40.318 "trtype": "RDMA", 00:19:40.318 "adrfam": "IPv4", 00:19:40.318 "traddr": "192.168.100.8", 00:19:40.318 "trsvcid": "4420" 00:19:40.318 }, 00:19:40.318 "peer_address": { 00:19:40.318 "trtype": "RDMA", 00:19:40.318 "adrfam": "IPv4", 00:19:40.318 "traddr": "192.168.100.8", 00:19:40.318 "trsvcid": "51027" 00:19:40.318 }, 00:19:40.318 "auth": { 00:19:40.318 "state": "completed", 00:19:40.318 "digest": "sha512", 00:19:40.318 "dhgroup": "ffdhe3072" 00:19:40.318 } 00:19:40.318 } 00:19:40.318 ]' 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.318 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.577 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.577 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.577 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.577 10:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:19:41.145 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.404 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.663 00:19:41.663 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.663 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.663 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.921 { 00:19:41.921 "cntlid": 117, 00:19:41.921 "qid": 0, 00:19:41.921 "state": "enabled", 00:19:41.921 "listen_address": { 00:19:41.921 "trtype": "RDMA", 00:19:41.921 "adrfam": "IPv4", 00:19:41.921 "traddr": "192.168.100.8", 00:19:41.921 "trsvcid": "4420" 00:19:41.921 }, 00:19:41.921 "peer_address": { 00:19:41.921 "trtype": "RDMA", 00:19:41.921 "adrfam": "IPv4", 00:19:41.921 "traddr": "192.168.100.8", 00:19:41.921 "trsvcid": "57884" 00:19:41.921 }, 00:19:41.921 "auth": { 00:19:41.921 "state": "completed", 00:19:41.921 "digest": "sha512", 00:19:41.921 "dhgroup": "ffdhe3072" 00:19:41.921 } 00:19:41.921 } 00:19:41.921 ]' 
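[editor's sketch, not part of the captured trace] The qpair listing just above is the read-back for one connect_authenticate round. The setup half of such a round, condensed into a standalone sketch: digest, DH group, and key ids vary per iteration (these are from the ffdhe3072/key2 pass traced above), key2/ckey2 name keyring entries registered earlier in auth.sh outside this section, and the rpc.py path, host UUID, and socket locations are taken from this log; treat it as illustrative, not as the script itself.

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate round (assumptions noted in the lead-in).
    RPC="/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562"
    SUBNQN="nqn.2024-03.io.spdk:cnode0"

    # Host side: restrict DH-HMAC-CHAP negotiation to one digest and one DH group.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Target side: allow the host NQN with the key pair under test
    # (the ctrlr key makes the authentication bidirectional).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach the controller; this is where the challenge
    # handshake actually runs, over RDMA to the listener in this log.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2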
00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.921 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.180 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.180 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.180 10:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.180 10:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:19:42.748 10:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.007 10:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:43.007 10:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.007 10:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.007 10:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.007 10:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.007 10:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:43.007 10:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:43.266 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:43.266 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.266 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.266 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:43.267 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.267 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.267 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:43.267 10:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.267 10:47:12 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.267 10:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.267 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.267 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.525 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.525 { 00:19:43.525 "cntlid": 119, 00:19:43.525 "qid": 0, 00:19:43.525 "state": "enabled", 00:19:43.525 "listen_address": { 00:19:43.525 "trtype": "RDMA", 00:19:43.525 "adrfam": "IPv4", 00:19:43.525 "traddr": "192.168.100.8", 00:19:43.525 "trsvcid": "4420" 00:19:43.525 }, 00:19:43.525 "peer_address": { 00:19:43.525 "trtype": "RDMA", 00:19:43.525 "adrfam": "IPv4", 00:19:43.525 "traddr": "192.168.100.8", 00:19:43.525 "trsvcid": "48245" 00:19:43.525 }, 00:19:43.525 "auth": { 00:19:43.525 "state": "completed", 00:19:43.525 "digest": "sha512", 00:19:43.525 "dhgroup": "ffdhe3072" 00:19:43.525 } 00:19:43.525 } 00:19:43.525 ]' 00:19:43.525 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.782 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.782 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.782 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.782 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.782 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.782 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.782 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.041 10:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.608 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.867 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.125 00:19:45.125 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.125 10:47:13 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.125 10:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.383 { 00:19:45.383 "cntlid": 121, 00:19:45.383 "qid": 0, 00:19:45.383 "state": "enabled", 00:19:45.383 "listen_address": { 00:19:45.383 "trtype": "RDMA", 00:19:45.383 "adrfam": "IPv4", 00:19:45.383 "traddr": "192.168.100.8", 00:19:45.383 "trsvcid": "4420" 00:19:45.383 }, 00:19:45.383 "peer_address": { 00:19:45.383 "trtype": "RDMA", 00:19:45.383 "adrfam": "IPv4", 00:19:45.383 "traddr": "192.168.100.8", 00:19:45.383 "trsvcid": "56027" 00:19:45.383 }, 00:19:45.383 "auth": { 00:19:45.383 "state": "completed", 00:19:45.383 "digest": "sha512", 00:19:45.383 "dhgroup": "ffdhe4096" 00:19:45.383 } 00:19:45.383 } 00:19:45.383 ]' 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.383 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.640 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:19:46.206 10:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.206 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:46.206 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.207 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.207 
10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.207 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.207 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.207 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.465 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.723 00:19:46.723 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.723 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.723 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.981 { 00:19:46.981 "cntlid": 123, 00:19:46.981 "qid": 0, 00:19:46.981 "state": "enabled", 
00:19:46.981 "listen_address": { 00:19:46.981 "trtype": "RDMA", 00:19:46.981 "adrfam": "IPv4", 00:19:46.981 "traddr": "192.168.100.8", 00:19:46.981 "trsvcid": "4420" 00:19:46.981 }, 00:19:46.981 "peer_address": { 00:19:46.981 "trtype": "RDMA", 00:19:46.981 "adrfam": "IPv4", 00:19:46.981 "traddr": "192.168.100.8", 00:19:46.981 "trsvcid": "34856" 00:19:46.981 }, 00:19:46.981 "auth": { 00:19:46.981 "state": "completed", 00:19:46.981 "digest": "sha512", 00:19:46.981 "dhgroup": "ffdhe4096" 00:19:46.981 } 00:19:46.981 } 00:19:46.981 ]' 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.981 10:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.239 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.805 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:48.096 10:47:16 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.096 10:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.354 00:19:48.354 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.354 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.354 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.612 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.612 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.612 10:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.612 10:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.613 { 00:19:48.613 "cntlid": 125, 00:19:48.613 "qid": 0, 00:19:48.613 "state": "enabled", 00:19:48.613 "listen_address": { 00:19:48.613 "trtype": "RDMA", 00:19:48.613 "adrfam": "IPv4", 00:19:48.613 "traddr": "192.168.100.8", 00:19:48.613 "trsvcid": "4420" 00:19:48.613 }, 00:19:48.613 "peer_address": { 00:19:48.613 "trtype": "RDMA", 00:19:48.613 "adrfam": "IPv4", 00:19:48.613 "traddr": "192.168.100.8", 00:19:48.613 "trsvcid": "40224" 00:19:48.613 }, 00:19:48.613 "auth": { 00:19:48.613 "state": "completed", 00:19:48.613 "digest": "sha512", 00:19:48.613 "dhgroup": "ffdhe4096" 00:19:48.613 } 00:19:48.613 } 00:19:48.613 ]' 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.613 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.871 10:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:19:49.436 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.694 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.694 10:47:18 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.952 00:19:49.952 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.953 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.953 10:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.211 { 00:19:50.211 "cntlid": 127, 00:19:50.211 "qid": 0, 00:19:50.211 "state": "enabled", 00:19:50.211 "listen_address": { 00:19:50.211 "trtype": "RDMA", 00:19:50.211 "adrfam": "IPv4", 00:19:50.211 "traddr": "192.168.100.8", 00:19:50.211 "trsvcid": "4420" 00:19:50.211 }, 00:19:50.211 "peer_address": { 00:19:50.211 "trtype": "RDMA", 00:19:50.211 "adrfam": "IPv4", 00:19:50.211 "traddr": "192.168.100.8", 00:19:50.211 "trsvcid": "60269" 00:19:50.211 }, 00:19:50.211 "auth": { 00:19:50.211 "state": "completed", 00:19:50.211 "digest": "sha512", 00:19:50.211 "dhgroup": "ffdhe4096" 00:19:50.211 } 00:19:50.211 } 00:19:50.211 ]' 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.211 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.469 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:19:51.036 10:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.294 10:47:20 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.294 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.295 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.295 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.861 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.861 { 00:19:51.861 "cntlid": 129, 00:19:51.861 "qid": 0, 00:19:51.861 "state": "enabled", 00:19:51.861 "listen_address": { 00:19:51.861 "trtype": "RDMA", 00:19:51.861 "adrfam": "IPv4", 00:19:51.861 "traddr": "192.168.100.8", 00:19:51.861 "trsvcid": "4420" 00:19:51.861 }, 00:19:51.861 "peer_address": { 00:19:51.861 "trtype": "RDMA", 00:19:51.861 "adrfam": "IPv4", 00:19:51.861 "traddr": "192.168.100.8", 00:19:51.861 "trsvcid": "42427" 00:19:51.861 }, 00:19:51.861 "auth": { 00:19:51.861 "state": "completed", 00:19:51.861 "digest": "sha512", 00:19:51.861 "dhgroup": "ffdhe6144" 00:19:51.861 } 00:19:51.861 } 00:19:51.861 ]' 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.861 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.119 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.119 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.119 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.119 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.119 10:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.119 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:19:52.686 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.944 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:52.944 10:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.944 10:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.944 10:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.944 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.944 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.944 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.203 10:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.203 10:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.203 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.203 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.462 00:19:53.462 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.462 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.462 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.720 { 00:19:53.720 "cntlid": 131, 00:19:53.720 "qid": 0, 00:19:53.720 "state": "enabled", 00:19:53.720 "listen_address": { 00:19:53.720 "trtype": "RDMA", 00:19:53.720 "adrfam": "IPv4", 00:19:53.720 "traddr": "192.168.100.8", 00:19:53.720 "trsvcid": "4420" 00:19:53.720 }, 00:19:53.720 "peer_address": { 00:19:53.720 "trtype": "RDMA", 00:19:53.720 "adrfam": "IPv4", 00:19:53.720 "traddr": "192.168.100.8", 00:19:53.720 "trsvcid": "33911" 00:19:53.720 }, 00:19:53.720 "auth": { 00:19:53.720 "state": "completed", 
00:19:53.720 "digest": "sha512", 00:19:53.720 "dhgroup": "ffdhe6144" 00:19:53.720 } 00:19:53.720 } 00:19:53.720 ]' 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.720 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.979 10:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.547 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:19:54.806 10:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.807 10:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.807 10:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.807 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.807 10:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.065 00:19:55.065 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.065 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.065 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.324 { 00:19:55.324 "cntlid": 133, 00:19:55.324 "qid": 0, 00:19:55.324 "state": "enabled", 00:19:55.324 "listen_address": { 00:19:55.324 "trtype": "RDMA", 00:19:55.324 "adrfam": "IPv4", 00:19:55.324 "traddr": "192.168.100.8", 00:19:55.324 "trsvcid": "4420" 00:19:55.324 }, 00:19:55.324 "peer_address": { 00:19:55.324 "trtype": "RDMA", 00:19:55.324 "adrfam": "IPv4", 00:19:55.324 "traddr": "192.168.100.8", 00:19:55.324 "trsvcid": "49886" 00:19:55.324 }, 00:19:55.324 "auth": { 00:19:55.324 "state": "completed", 00:19:55.324 "digest": "sha512", 00:19:55.324 "dhgroup": "ffdhe6144" 00:19:55.324 } 00:19:55.324 } 00:19:55.324 ]' 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.324 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:55.583 10:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:19:56.151 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.409 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.668 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.668 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.668 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.927 00:19:56.927 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:19:56.927 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.927 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.186 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.186 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.186 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.186 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.186 10:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.186 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.186 { 00:19:57.186 "cntlid": 135, 00:19:57.186 "qid": 0, 00:19:57.186 "state": "enabled", 00:19:57.186 "listen_address": { 00:19:57.186 "trtype": "RDMA", 00:19:57.186 "adrfam": "IPv4", 00:19:57.186 "traddr": "192.168.100.8", 00:19:57.186 "trsvcid": "4420" 00:19:57.186 }, 00:19:57.186 "peer_address": { 00:19:57.186 "trtype": "RDMA", 00:19:57.186 "adrfam": "IPv4", 00:19:57.186 "traddr": "192.168.100.8", 00:19:57.186 "trsvcid": "38343" 00:19:57.186 }, 00:19:57.186 "auth": { 00:19:57.186 "state": "completed", 00:19:57.186 "digest": "sha512", 00:19:57.186 "dhgroup": "ffdhe6144" 00:19:57.186 } 00:19:57.186 } 00:19:57.186 ]' 00:19:57.186 10:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.186 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.186 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.186 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.186 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.186 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.186 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.187 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.445 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:19:58.013 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.013 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:58.013 10:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.014 10:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.014 10:47:26 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.014 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.014 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.014 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.014 10:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.273 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.840 00:19:58.840 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.840 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.840 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.840 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.840 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.841 10:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.841 10:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.841 10:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.841 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.841 { 00:19:58.841 
"cntlid": 137, 00:19:58.841 "qid": 0, 00:19:58.841 "state": "enabled", 00:19:58.841 "listen_address": { 00:19:58.841 "trtype": "RDMA", 00:19:58.841 "adrfam": "IPv4", 00:19:58.841 "traddr": "192.168.100.8", 00:19:58.841 "trsvcid": "4420" 00:19:58.841 }, 00:19:58.841 "peer_address": { 00:19:58.841 "trtype": "RDMA", 00:19:58.841 "adrfam": "IPv4", 00:19:58.841 "traddr": "192.168.100.8", 00:19:58.841 "trsvcid": "47216" 00:19:58.841 }, 00:19:58.841 "auth": { 00:19:58.841 "state": "completed", 00:19:58.841 "digest": "sha512", 00:19:58.841 "dhgroup": "ffdhe8192" 00:19:58.841 } 00:19:58.841 } 00:19:58.841 ]' 00:19:58.841 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.841 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.841 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.099 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.099 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.099 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.099 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.099 10:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.358 10:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.926 10:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.185 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.753 00:20:00.753 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.753 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.753 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.753 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.754 { 00:20:00.754 "cntlid": 139, 00:20:00.754 "qid": 0, 00:20:00.754 "state": "enabled", 00:20:00.754 "listen_address": { 00:20:00.754 "trtype": "RDMA", 00:20:00.754 "adrfam": "IPv4", 00:20:00.754 "traddr": "192.168.100.8", 00:20:00.754 "trsvcid": "4420" 00:20:00.754 }, 00:20:00.754 "peer_address": { 00:20:00.754 "trtype": "RDMA", 00:20:00.754 "adrfam": "IPv4", 00:20:00.754 "traddr": "192.168.100.8", 00:20:00.754 "trsvcid": "42005" 00:20:00.754 }, 00:20:00.754 "auth": { 00:20:00.754 "state": "completed", 00:20:00.754 "digest": "sha512", 00:20:00.754 "dhgroup": "ffdhe8192" 00:20:00.754 } 00:20:00.754 } 00:20:00.754 ]' 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.754 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.013 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.013 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.013 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.013 10:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzQxYmVhNTQyYjE2MWRiNDYyNWVmYWFjMGIyMDI1MTlQ2p00: --dhchap-ctrl-secret DHHC-1:02:YWNmZTg5NTRmYzU3YWYxMmQzNDRiNmY0MjNmNjhjYWFmMzUyZDBhZTJlNThiNDZmAWRq+Q==: 00:20:01.580 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.839 10:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.408 00:20:02.408 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.408 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.408 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.668 { 00:20:02.668 "cntlid": 141, 00:20:02.668 "qid": 0, 00:20:02.668 "state": "enabled", 00:20:02.668 "listen_address": { 00:20:02.668 "trtype": "RDMA", 00:20:02.668 "adrfam": "IPv4", 00:20:02.668 "traddr": "192.168.100.8", 00:20:02.668 "trsvcid": "4420" 00:20:02.668 }, 00:20:02.668 "peer_address": { 00:20:02.668 "trtype": "RDMA", 00:20:02.668 "adrfam": "IPv4", 00:20:02.668 "traddr": "192.168.100.8", 00:20:02.668 "trsvcid": "40429" 00:20:02.668 }, 00:20:02.668 "auth": { 00:20:02.668 "state": "completed", 00:20:02.668 "digest": "sha512", 00:20:02.668 "dhgroup": "ffdhe8192" 00:20:02.668 } 00:20:02.668 } 00:20:02.668 ]' 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.668 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.927 10:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODhmMzFhNDdjMDRlNWNiYThmNGQxYTgyNmRmMjk1YTNjOWY3M2M5YjQ5OTY3NmY014rjsg==: --dhchap-ctrl-secret 
DHHC-1:01:NjE5NDkyMGY1NjdiNmUzYzYzMzRkYjczOGU4MGFmYzDAUF8/: 00:20:03.494 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.494 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:03.494 10:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.753 10:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.320 00:20:04.320 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.320 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.320 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.579 10:47:33 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.579 { 00:20:04.579 "cntlid": 143, 00:20:04.579 "qid": 0, 00:20:04.579 "state": "enabled", 00:20:04.579 "listen_address": { 00:20:04.579 "trtype": "RDMA", 00:20:04.579 "adrfam": "IPv4", 00:20:04.579 "traddr": "192.168.100.8", 00:20:04.579 "trsvcid": "4420" 00:20:04.579 }, 00:20:04.579 "peer_address": { 00:20:04.579 "trtype": "RDMA", 00:20:04.579 "adrfam": "IPv4", 00:20:04.579 "traddr": "192.168.100.8", 00:20:04.579 "trsvcid": "37906" 00:20:04.579 }, 00:20:04.579 "auth": { 00:20:04.579 "state": "completed", 00:20:04.579 "digest": "sha512", 00:20:04.579 "dhgroup": "ffdhe8192" 00:20:04.579 } 00:20:04.579 } 00:20:04.579 ]' 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.579 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.867 10:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:05.464 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.722 10:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.289 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.289 { 00:20:06.289 "cntlid": 145, 00:20:06.289 "qid": 0, 00:20:06.289 "state": "enabled", 00:20:06.289 
"listen_address": { 00:20:06.289 "trtype": "RDMA", 00:20:06.289 "adrfam": "IPv4", 00:20:06.289 "traddr": "192.168.100.8", 00:20:06.289 "trsvcid": "4420" 00:20:06.289 }, 00:20:06.289 "peer_address": { 00:20:06.289 "trtype": "RDMA", 00:20:06.289 "adrfam": "IPv4", 00:20:06.289 "traddr": "192.168.100.8", 00:20:06.289 "trsvcid": "35157" 00:20:06.289 }, 00:20:06.289 "auth": { 00:20:06.289 "state": "completed", 00:20:06.289 "digest": "sha512", 00:20:06.289 "dhgroup": "ffdhe8192" 00:20:06.289 } 00:20:06.289 } 00:20:06.289 ]' 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.289 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.547 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.547 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.547 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.548 10:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWIxZWQ3ZTE1OTgwZmQ4NjUyNWJjODEzODcwYTkzOGYxMzA5NmE1YzNhZTgzMjFk4nGNsQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk0OGM3NDhmMzBjNjY2NjAzNTJlMmEyODdkY2I4OTFjNWZjYjgzNDQwODE3NzIwYjhmMWNkODljNjdiYzc4M+eaDm4=: 00:20:07.115 10:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.373 10:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:07.373 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.373 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.373 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.373 10:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 
00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:07.374 10:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:39.451 request: 00:20:39.451 { 00:20:39.451 "name": "nvme0", 00:20:39.451 "trtype": "rdma", 00:20:39.451 "traddr": "192.168.100.8", 00:20:39.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:39.451 "adrfam": "ipv4", 00:20:39.451 "trsvcid": "4420", 00:20:39.451 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:39.451 "dhchap_key": "key2", 00:20:39.451 "method": "bdev_nvme_attach_controller", 00:20:39.451 "req_id": 1 00:20:39.451 } 00:20:39.451 Got JSON-RPC error response 00:20:39.451 response: 00:20:39.451 { 00:20:39.451 "code": -5, 00:20:39.451 "message": "Input/output error" 00:20:39.451 } 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- 
# NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:39.451 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:39.452 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:39.452 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:39.452 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:39.452 10:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:39.452 10:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:39.452 request: 00:20:39.452 { 00:20:39.452 "name": "nvme0", 00:20:39.452 "trtype": "rdma", 00:20:39.452 "traddr": "192.168.100.8", 00:20:39.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:39.452 "adrfam": "ipv4", 00:20:39.452 "trsvcid": "4420", 00:20:39.452 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:39.452 "dhchap_key": "key1", 00:20:39.452 "dhchap_ctrlr_key": "ckey2", 00:20:39.452 "method": "bdev_nvme_attach_controller", 00:20:39.452 "req_id": 1 00:20:39.452 } 00:20:39.452 Got JSON-RPC error response 00:20:39.452 response: 00:20:39.452 { 00:20:39.452 "code": -5, 00:20:39.452 "message": "Input/output error" 00:20:39.452 } 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.452 10:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.535 request: 00:21:11.535 { 00:21:11.535 "name": "nvme0", 00:21:11.535 "trtype": "rdma", 00:21:11.535 "traddr": "192.168.100.8", 00:21:11.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:11.535 "adrfam": "ipv4", 00:21:11.535 "trsvcid": "4420", 00:21:11.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:11.535 "dhchap_key": "key1", 00:21:11.535 "dhchap_ctrlr_key": "ckey1", 00:21:11.535 "method": "bdev_nvme_attach_controller", 00:21:11.535 "req_id": 1 00:21:11.535 } 00:21:11.535 Got JSON-RPC error response 00:21:11.535 response: 00:21:11.535 { 00:21:11.535 "code": -5, 00:21:11.535 "message": "Input/output error" 00:21:11.535 } 00:21:11.535 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:11.535 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:11.535 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:11.535 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:11.535 10:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
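The failed attaches above are deliberate: the target-side host entry was registered with key1 alone, so presenting key2, or pairing key1 with a controller key (ckey1 or ckey2) the target never stored, must be rejected, and the host RPC surfaces the DH-HMAC-CHAP failure as JSON-RPC error -5 ("Input/output error"). The NOT wrapper in the trace asserts that the attach exits non-zero. A sketch of the same expect-failure check outside the harness, reusing the log's rpc.py invocation (the harness's NOT/valid_exec_arg helpers do this more generally):

# Expect-failure check: a DH-HMAC-CHAP key mismatch must make
# bdev_nvme_attach_controller fail (JSON-RPC -5, Input/output error).
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1; then
    echo "attach succeeded, but the key mismatch should have been rejected" >&2
    exit 1
fi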
00:21:11.535 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.535 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4086997 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 4086997 ']' 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 4086997 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4086997 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4086997' 00:21:11.536 killing process with pid 4086997 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 4086997 00:21:11.536 10:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 4086997 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4119076 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4119076 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 4119076 ']' 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
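Above, the first target (pid 4086997) is killed and a fresh nvmf_tgt is started with --wait-for-rpc and -L nvmf_auth, so the remaining cases run against a target with authentication-level logging enabled; waitforlisten then blocks until pid 4119076 answers on /var/tmp/spdk.sock. A sketch of that restart-and-wait pattern with the log's binary path; the polling loop here is an assumption, since the harness uses its own waitforlisten helper:

# Restart the target with nvmf_auth logging and wait for its RPC socket.
app=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt
"$app" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll the default RPC socket; rpc_get_methods answers even while the app
# is held in the --wait-for-rpc pre-initialization state.
until /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt up as pid $nvmfpid"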
00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4119076 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 4119076 ']' 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:11.536 10:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.536 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.536 { 00:21:11.536 "cntlid": 1, 00:21:11.536 "qid": 0, 00:21:11.536 "state": "enabled", 00:21:11.536 "listen_address": { 00:21:11.536 "trtype": "RDMA", 00:21:11.536 "adrfam": "IPv4", 00:21:11.536 "traddr": "192.168.100.8", 00:21:11.536 "trsvcid": "4420" 00:21:11.536 }, 00:21:11.536 "peer_address": { 00:21:11.536 "trtype": "RDMA", 00:21:11.536 "adrfam": "IPv4", 00:21:11.536 "traddr": "192.168.100.8", 00:21:11.536 "trsvcid": "48901" 00:21:11.536 }, 00:21:11.536 "auth": { 00:21:11.536 "state": "completed", 00:21:11.536 "digest": "sha512", 00:21:11.536 "dhgroup": "ffdhe8192" 00:21:11.536 } 00:21:11.536 } 00:21:11.536 ]' 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.536 10:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.536 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.536 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.536 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.536 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.536 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.536 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NGRiN2JkMTRjMjIyMDQ5NTlhOTE1ZWUyNzIzNTI3NzIxZjY0ZmJjZTU5OTMzNTQ0ZGJlYmI1OThhNDgxMGE5NLqRD9c=: 00:21:11.797 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:12.055 10:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.313 10:48:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.416 request: 00:21:44.416 { 00:21:44.416 "name": "nvme0", 00:21:44.416 "trtype": "rdma", 
00:21:44.416 "traddr": "192.168.100.8", 00:21:44.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:44.416 "adrfam": "ipv4", 00:21:44.416 "trsvcid": "4420", 00:21:44.416 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.416 "dhchap_key": "key3", 00:21:44.416 "method": "bdev_nvme_attach_controller", 00:21:44.416 "req_id": 1 00:21:44.416 } 00:21:44.416 Got JSON-RPC error response 00:21:44.416 response: 00:21:44.416 { 00:21:44.416 "code": -5, 00:21:44.416 "message": "Input/output error" 00:21:44.416 } 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.416 10:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.496 request: 00:22:16.496 { 00:22:16.496 "name": "nvme0", 00:22:16.496 "trtype": "rdma", 00:22:16.496 "traddr": "192.168.100.8", 00:22:16.496 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:16.496 "adrfam": "ipv4", 00:22:16.496 "trsvcid": "4420", 00:22:16.496 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.496 "dhchap_key": "key3", 00:22:16.496 "method": "bdev_nvme_attach_controller", 00:22:16.496 "req_id": 1 00:22:16.496 } 00:22:16.496 Got JSON-RPC error response 00:22:16.496 response: 00:22:16.496 { 00:22:16.496 "code": -5, 00:22:16.496 "message": "Input/output error" 00:22:16.496 } 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.496 10:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:16.496 
10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:22:16.496 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:16.497 request: 00:22:16.497 { 00:22:16.497 "name": "nvme0", 00:22:16.497 "trtype": "rdma", 00:22:16.497 "traddr": "192.168.100.8", 00:22:16.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:22:16.497 "adrfam": "ipv4", 00:22:16.497 "trsvcid": "4420", 00:22:16.497 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.497 "dhchap_key": "key0", 00:22:16.497 "dhchap_ctrlr_key": "key1", 00:22:16.497 "method": "bdev_nvme_attach_controller", 00:22:16.497 "req_id": 1 00:22:16.497 } 00:22:16.497 Got JSON-RPC error response 00:22:16.497 response: 00:22:16.497 { 00:22:16.497 "code": -5, 00:22:16.497 "message": "Input/output error" 00:22:16.497 } 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:16.497 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4087239 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 4087239 ']' 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 4087239 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4087239 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4087239' 00:22:16.497 killing process with pid 4087239 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 4087239 00:22:16.497 10:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 4087239 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:16.497 rmmod nvme_rdma 00:22:16.497 rmmod nvme_fabrics 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4119076 ']' 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4119076 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 4119076 ']' 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 4119076 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4119076 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:16.497 10:49:43 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4119076' 00:22:16.497 killing process with pid 4119076 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 4119076 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 4119076 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8K6 /tmp/spdk.key-sha256.HjT /tmp/spdk.key-sha384.DwQ /tmp/spdk.key-sha512.tkp /tmp/spdk.key-sha512.wJF /tmp/spdk.key-sha384.8En /tmp/spdk.key-sha256.6us '' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf-auth.log 00:22:16.497 00:22:16.497 real 4m18.830s 00:22:16.497 user 9m19.236s 00:22:16.497 sys 0m19.717s 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:16.497 10:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.497 ************************************ 00:22:16.497 END TEST nvmf_auth_target 00:22:16.497 ************************************ 00:22:16.497 10:49:43 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:22:16.497 10:49:43 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:22:16.497 10:49:43 nvmf_rdma -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:16.497 10:49:43 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:16.497 10:49:43 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:16.497 10:49:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:16.497 ************************************ 00:22:16.497 START TEST nvmf_fuzz 00:22:16.497 ************************************ 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:16.497 * Looking for test storage... 
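Every negative check in the auth section above has the same shape: tighten the host's allowed DH-HMAC-CHAP parameters via bdev_nvme_set_options, then confirm that bdev_nvme_attach_controller fails with JSON-RPC code -5 (Input/output error). A condensed sketch against the host-side socket, reusing the addresses and NQNs from this run; SPDK_DIR and the success check are illustrative additions:

HOSTRPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/host.sock"

# Allow only sha256 on the host; key3 was negotiated with sha512/ffdhe8192 above,
# so the subsequent attach is expected to be rejected
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256

if $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
  echo "attach unexpectedly succeeded" >&2
fi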
00:22:16.497 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.497 10:49:43 nvmf_rdma.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.498 10:49:43 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.690 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:20.691 10:49:49 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:20.691 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:20.691 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@377 -- # modinfo irdma 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:20.691 Found net devices under 0000:af:00.0: cvl_0_0 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.691 
10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:20.691 Found net devices under 0000:af:00.1: cvl_0_1 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_0 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_1 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- 
nvmf/common.sh@105 -- # continue 2 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:22:20.691 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:20.691 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:22:20.691 altname enp175s0f0np0 00:22:20.691 altname ens801f0np0 00:22:20.691 inet 192.168.100.8/24 scope global cvl_0_0 00:22:20.691 valid_lft forever preferred_lft forever 00:22:20.691 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:22:20.691 valid_lft forever preferred_lft forever 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:20.691 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:22:20.692 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:20.692 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:22:20.692 altname enp175s0f1np1 00:22:20.692 altname ens801f1np1 00:22:20.692 inet 192.168.100.9/24 scope global cvl_0_1 00:22:20.692 valid_lft forever preferred_lft forever 00:22:20.692 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:22:20.692 valid_lft forever preferred_lft forever 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:20.692 10:49:49 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_0 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:20.692 192.168.100.9' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:20.692 192.168.100.9' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:20.692 192.168.100.9' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- 
nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4133030 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4133030 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 4133030 ']' 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:20.692 10:49:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:21.260 Malloc0 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:22:21.260 10:49:50 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:22:53.336 Fuzzing completed. Shutting down the fuzz application 00:22:53.336 00:22:53.336 Dumping successful admin opcodes: 00:22:53.336 8, 9, 10, 24, 00:22:53.336 Dumping successful io opcodes: 00:22:53.336 0, 9, 00:22:53.336 NS: 0x200003af1f00 I/O qp, Total commands completed: 1212398, total successful commands: 7122, random_seed: 543535552 00:22:53.336 NS: 0x200003af1f00 admin qp, Total commands completed: 152896, total successful commands: 1235, random_seed: 2178767296 00:22:53.336 10:50:21 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:53.901 Fuzzing completed. Shutting down the fuzz application 00:22:53.901 00:22:53.901 Dumping successful admin opcodes: 00:22:53.901 24, 00:22:53.901 Dumping successful io opcodes: 00:22:53.901 00:22:53.901 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 482265568 00:22:53.901 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 482327834 00:22:53.901 10:50:22 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.901 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.901 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:53.901 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.902 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:53.902 rmmod nvme_rdma 00:22:53.902 rmmod nvme_fabrics 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 4133030 ']' 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
4133030 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 4133030 ']' 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 4133030 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4133030 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4133030' 00:22:54.160 killing process with pid 4133030 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 4133030 00:22:54.160 10:50:22 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 4133030 00:22:54.420 10:50:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.420 10:50:23 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:54.420 10:50:23 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:54.420 00:22:54.420 real 0m39.651s 00:22:54.420 user 0m53.722s 00:22:54.420 sys 0m17.873s 00:22:54.420 10:50:23 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:54.420 10:50:23 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:54.420 ************************************ 00:22:54.420 END TEST nvmf_fuzz 00:22:54.420 ************************************ 00:22:54.420 10:50:23 nvmf_rdma -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:54.420 10:50:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:54.420 10:50:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:54.420 10:50:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:54.420 ************************************ 00:22:54.420 START TEST nvmf_multiconnection 00:22:54.420 ************************************ 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:22:54.420 * Looking for test storage... 
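The two fuzz passes summarized above boil down to one binary run twice against the same transport ID: a timed run with a fixed random seed, then a replay of the canned requests in example.json. A sketch with the transport ID from this run; SPDK_DIR is an assumed variable and the remaining flags are copied verbatim from the trace:

TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
FUZZ="$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz"

# Pass 1: 30-second randomized run with seed 123456
"$FUZZ" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

# Pass 2: replay the example JSON request file
"$FUZZ" -m 0x2 -F "$TRID" -j "$SPDK_DIR/test/app/fuzz/nvme_fuzz/example.json" -a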
00:22:54.420 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.420 10:50:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:59.694 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:59.694 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@377 -- # modinfo irdma 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:22:59.694 10:50:28 
nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:59.694 Found net devices under 0000:af:00.0: cvl_0_0 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.694 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:59.695 Found net devices under 0000:af:00.1: cvl_0_1 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:59.695 10:50:28 
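The kernel bring-up just traced by load_ib_rdma_modules reduces to a short sequence; a sketch assuming the Intel E810/irdma stack seen on this node (roce_ena is irdma-specific, and on other vendors' NICs the device driver step differs).

    # RoCE support for the irdma (E810) driver, as in nvmf/common.sh above.
    modprobe irdma roce_ena=1

    # Core InfiniBand/RDMA plumbing, in the same order the trace loads it.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done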
nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:59.695 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:59.954 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:59.954 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:59.954 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:59.954 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:59.954 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_0 00:22:59.954 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:22:59.955 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:59.955 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:22:59.955 altname enp175s0f0np0 00:22:59.955 altname ens801f0np0 00:22:59.955 inet 192.168.100.8/24 scope global cvl_0_0 00:22:59.955 valid_lft forever preferred_lft forever 00:22:59.955 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:22:59.955 valid_lft forever preferred_lft forever 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:22:59.955 10:50:28 
nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:22:59.955 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:59.955 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:22:59.955 altname enp175s0f1np1 00:22:59.955 altname ens801f1np1 00:22:59.955 inet 192.168.100.9/24 scope global cvl_0_1 00:22:59.955 valid_lft forever preferred_lft forever 00:22:59.955 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:22:59.955 valid_lft forever preferred_lft forever 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # 
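get_ip_address, which yields the 192.168.100.8 and 192.168.100.9 addresses above, is just an ip(8) pipeline; the same extraction stands alone as the sketch below (cf. nvmf/common.sh@113).

    # First IPv4 address on an interface, prefix length stripped.
    # With -o, ip prints one record per address; these ports carry exactly one each.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0   # -> 192.168.100.8 on this node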
for nic_name in $(get_rdma_if_list) 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:59.955 192.168.100.9' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:59.955 192.168.100.9' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:59.955 192.168.100.9' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=4141758 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 4141758 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 4141758 ']' 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- 
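nvmfappstart, traced next, boils down to launching the target and waiting for its RPC socket. A sketch under stated assumptions: the binary path and flags are taken from this run, but the readiness probe here polls rpc_get_methods via the stock scripts/rpc.py client, which is an assumed stand-in for waitforlisten's own check, not the harness's actual logic.

    # Launch the NVMe-oF target: shm id 0 (-i), tracepoints 0xFFFF (-e), cores 0-3 (-m 0xF).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the app answers (assumed probe).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done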
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:59.955 10:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:59.955 [2024-06-10 10:50:28.922908] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:59.955 [2024-06-10 10:50:28.922969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.955 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.245 [2024-06-10 10:50:28.990494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.245 [2024-06-10 10:50:29.070999] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.245 [2024-06-10 10:50:29.071038] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.245 [2024-06-10 10:50:29.071046] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.245 [2024-06-10 10:50:29.071052] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.245 [2024-06-10 10:50:29.071057] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.245 [2024-06-10 10:50:29.071105] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.245 [2024-06-10 10:50:29.071125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.245 [2024-06-10 10:50:29.071193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.245 [2024-06-10 10:50:29.071194] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:00.812 [2024-06-10 10:50:29.785450] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x7268f0/0x725f30) succeed. 
00:23:00.812 [2024-06-10 10:50:29.794257] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x727ca0/0x7264b0) succeed. 00:23:00.812 [2024-06-10 10:50:29.794279] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:00.812 Malloc1 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.812 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 [2024-06-10 10:50:29.853443] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 Malloc2 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 Malloc3 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection 
-- common/autotest_common.sh@10 -- # set +x 00:23:01.072 Malloc4 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 Malloc5 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 Malloc6 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.072 Malloc7 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.072 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 Malloc8 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 Malloc9 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.332 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 Malloc10 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 Malloc11 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.333 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:01.592 10:50:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:01.592 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:01.592 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:01.592 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:01.592 10:50:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n 
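The eleven bdev/subsystem/namespace/listener quadruples traced through this stretch are one loop in multiconnection.sh. Restated as a sketch with the stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper; RPC method names and arguments are exactly as logged, the client path is an assumption.

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    # The transport is created once, before the loop (multiconnection.sh@19).
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    for i in $(seq 1 11); do
        $RPC bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB bdev, 512 B blocks
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done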
nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:04.123 10:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:06.025 10:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 
00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:08.559 10:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:10.463 10:50:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:13.015 10:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:13.015 10:50:41 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:14.923 10:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:17.454 10:50:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:23:17.454 10:50:46 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:17.454 10:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:17.454 10:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:17.454 10:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:17.454 10:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 
-- # lsblk -l -o NAME,SERIAL 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:19.357 10:50:48 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:23:19.616 10:50:48 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:19.616 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:19.616 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.616 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:19.616 10:50:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:21.519 10:50:50 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:23:21.777 10:50:50 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:21.777 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:21.777 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.777 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:21.777 10:50:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:23.680 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:23.680 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:23.680 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:23:23.680 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:23.680 10:50:52 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:23.680 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:23.680 10:50:52 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.680 10:50:52 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:23:23.938 10:50:52 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:23.938 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:23:23.938 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:23.938 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:23.938 10:50:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:23:25.905 10:50:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:25.905 10:50:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:25.905 10:50:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:23:26.164 10:50:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:26.164 10:50:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:26.164 10:50:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:23:26.164 10:50:54 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:26.164 [global] 00:23:26.164 thread=1 00:23:26.164 invalidate=1 00:23:26.164 rw=read 00:23:26.164 time_based=1 00:23:26.164 runtime=10 00:23:26.164 ioengine=libaio 00:23:26.164 direct=1 00:23:26.164 bs=262144 00:23:26.164 iodepth=64 00:23:26.164 norandommap=1 00:23:26.164 numjobs=1 00:23:26.164 00:23:26.164 [job0] 00:23:26.164 filename=/dev/nvme0n1 00:23:26.164 [job1] 00:23:26.164 filename=/dev/nvme10n1 00:23:26.164 [job2] 00:23:26.164 filename=/dev/nvme11n1 00:23:26.164 [job3] 00:23:26.164 filename=/dev/nvme2n1 00:23:26.164 [job4] 00:23:26.164 filename=/dev/nvme3n1 00:23:26.164 [job5] 00:23:26.164 filename=/dev/nvme4n1 00:23:26.164 [job6] 00:23:26.164 filename=/dev/nvme5n1 00:23:26.164 [job7] 00:23:26.164 filename=/dev/nvme6n1 00:23:26.164 [job8] 00:23:26.164 filename=/dev/nvme7n1 00:23:26.164 [job9] 00:23:26.164 filename=/dev/nvme8n1 00:23:26.164 [job10] 00:23:26.164 filename=/dev/nvme9n1 00:23:26.164 Could not set queue depth (nvme0n1) 00:23:26.164 Could not set queue depth (nvme10n1) 00:23:26.164 Could not set queue depth (nvme11n1) 00:23:26.164 Could not set queue depth (nvme2n1) 00:23:26.164 Could not set queue depth (nvme3n1) 00:23:26.164 Could not set queue depth (nvme4n1) 00:23:26.164 Could not set queue depth (nvme5n1) 00:23:26.164 Could not set queue depth (nvme6n1) 00:23:26.164 Could not set queue depth (nvme7n1) 00:23:26.164 Could not set queue depth (nvme8n1) 00:23:26.164 Could not set queue depth (nvme9n1) 00:23:26.422 job0: (g=0): rw=read, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:26.422 fio-3.35 00:23:26.422 Starting 11 threads 00:23:38.633 00:23:38.633 job0: (groupid=0, jobs=1): err= 0: pid=4146419: Mon Jun 10 10:51:05 2024 00:23:38.633 read: IOPS=1006, BW=252MiB/s (264MB/s)(2525MiB/10038msec) 00:23:38.633 slat (usec): min=12, max=14601, avg=987.27, stdev=2201.51 00:23:38.633 clat (usec): min=11324, max=94849, avg=62558.57, stdev=11595.73 00:23:38.633 lat (msec): min=11, max=107, avg=63.55, stdev=11.90 00:23:38.633 clat percentiles (usec): 00:23:38.633 | 1.00th=[35914], 5.00th=[39060], 10.00th=[48497], 20.00th=[51119], 00:23:38.633 | 30.00th=[59507], 40.00th=[61604], 50.00th=[62653], 60.00th=[64226], 00:23:38.633 | 70.00th=[68682], 80.00th=[74974], 90.00th=[76022], 95.00th=[78119], 00:23:38.633 | 99.00th=[86508], 99.50th=[88605], 99.90th=[93848], 99.95th=[94897], 00:23:38.633 | 99.99th=[94897] 00:23:38.633 bw ( KiB/s): min=206848, max=397312, per=5.18%, avg=256951.20, stdev=46371.78, samples=20 00:23:38.633 iops : min= 808, max= 1552, avg=1003.65, stdev=181.10, samples=20 00:23:38.633 lat (msec) : 20=0.22%, 50=13.84%, 100=85.94% 00:23:38.633 cpu : usr=0.24%, sys=3.97%, ctx=2257, majf=0, minf=4097 00:23:38.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:38.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.633 issued rwts: total=10100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.633 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.633 job1: (groupid=0, jobs=1): err= 0: pid=4146427: Mon Jun 10 10:51:05 2024 00:23:38.633 read: IOPS=2304, BW=576MiB/s (604MB/s)(5774MiB/10023msec) 00:23:38.633 slat (usec): min=8, max=9953, avg=430.56, stdev=924.71 00:23:38.633 clat (usec): min=8679, max=72522, avg=27320.25, stdev=6990.22 00:23:38.634 lat (usec): min=8869, max=72568, avg=27750.81, stdev=7123.29 00:23:38.634 clat percentiles (usec): 00:23:38.634 | 1.00th=[21365], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:23:38.634 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:23:38.634 | 70.00th=[25822], 80.00th=[26608], 90.00th=[33817], 95.00th=[38011], 00:23:38.634 | 99.00th=[62129], 
99.50th=[62653], 99.90th=[65799], 99.95th=[66847], 00:23:38.634 | 99.99th=[71828] 00:23:38.634 bw ( KiB/s): min=262656, max=655872, per=11.88%, avg=589542.70, stdev=109027.42, samples=20 00:23:38.634 iops : min= 1026, max= 2562, avg=2302.90, stdev=425.89, samples=20 00:23:38.634 lat (msec) : 10=0.07%, 20=0.32%, 50=95.95%, 100=3.66% 00:23:38.634 cpu : usr=0.38%, sys=5.48%, ctx=5453, majf=0, minf=4097 00:23:38.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.634 issued rwts: total=23094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.634 job2: (groupid=0, jobs=1): err= 0: pid=4146432: Mon Jun 10 10:51:05 2024 00:23:38.634 read: IOPS=2406, BW=602MiB/s (631MB/s)(6031MiB/10024msec) 00:23:38.634 slat (usec): min=9, max=6771, avg=413.36, stdev=838.27 00:23:38.634 clat (usec): min=7186, max=55742, avg=26158.34, stdev=4783.53 00:23:38.634 lat (usec): min=7417, max=55779, avg=26571.70, stdev=4881.43 00:23:38.634 clat percentiles (usec): 00:23:38.634 | 1.00th=[12387], 5.00th=[23200], 10.00th=[24249], 20.00th=[24773], 00:23:38.634 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:23:38.634 | 70.00th=[26084], 80.00th=[26608], 90.00th=[28705], 95.00th=[36439], 00:23:38.634 | 99.00th=[41681], 99.50th=[47449], 99.90th=[52167], 99.95th=[53216], 00:23:38.634 | 99.99th=[53740] 00:23:38.634 bw ( KiB/s): min=425133, max=761344, per=12.42%, avg=615867.85, stdev=81757.20, samples=20 00:23:38.634 iops : min= 1660, max= 2974, avg=2405.70, stdev=319.45, samples=20 00:23:38.634 lat (msec) : 10=0.09%, 20=4.38%, 50=95.36%, 100=0.17% 00:23:38.634 cpu : usr=0.36%, sys=5.38%, ctx=5727, majf=0, minf=3347 00:23:38.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.634 issued rwts: total=24122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.634 job3: (groupid=0, jobs=1): err= 0: pid=4146437: Mon Jun 10 10:51:05 2024 00:23:38.634 read: IOPS=2223, BW=556MiB/s (583MB/s)(5575MiB/10030msec) 00:23:38.634 slat (usec): min=8, max=14092, avg=440.16, stdev=985.71 00:23:38.634 clat (usec): min=7592, max=79918, avg=28321.40, stdev=7864.04 00:23:38.634 lat (usec): min=7802, max=79954, avg=28761.55, stdev=8000.72 00:23:38.634 clat percentiles (usec): 00:23:38.634 | 1.00th=[21627], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:23:38.634 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:23:38.634 | 70.00th=[26346], 80.00th=[27657], 90.00th=[36963], 95.00th=[47449], 00:23:38.634 | 99.00th=[62653], 99.50th=[63701], 99.90th=[78119], 99.95th=[79168], 00:23:38.634 | 99.99th=[80217] 00:23:38.634 bw ( KiB/s): min=271360, max=646144, per=11.48%, avg=569259.95, stdev=114694.20, samples=20 00:23:38.634 iops : min= 1060, max= 2524, avg=2223.60, stdev=448.03, samples=20 00:23:38.634 lat (msec) : 10=0.08%, 20=0.61%, 50=95.19%, 100=4.13% 00:23:38.634 cpu : usr=0.41%, sys=5.20%, ctx=5462, majf=0, minf=4097 00:23:38.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:23:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.634 issued rwts: total=22299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.634 job4: (groupid=0, jobs=1): err= 0: pid=4146439: Mon Jun 10 10:51:05 2024 00:23:38.634 read: IOPS=1694, BW=424MiB/s (444MB/s)(4247MiB/10023msec) 00:23:38.634 slat (usec): min=8, max=41271, avg=578.36, stdev=2134.14 00:23:38.634 clat (msec): min=10, max=113, avg=37.15, stdev=19.03 00:23:38.634 lat (msec): min=10, max=114, avg=37.73, stdev=19.40 00:23:38.634 clat percentiles (msec): 00:23:38.634 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 25], 20.00th=[ 26], 00:23:38.634 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 28], 00:23:38.634 | 70.00th=[ 37], 80.00th=[ 42], 90.00th=[ 75], 95.00th=[ 77], 00:23:38.634 | 99.00th=[ 85], 99.50th=[ 93], 99.90th=[ 111], 99.95th=[ 112], 00:23:38.634 | 99.99th=[ 113] 00:23:38.634 bw ( KiB/s): min=205312, max=642048, per=8.73%, avg=433278.80, stdev=181199.30, samples=20 00:23:38.634 iops : min= 802, max= 2508, avg=1692.45, stdev=707.81, samples=20 00:23:38.634 lat (msec) : 20=0.62%, 50=80.79%, 100=18.28%, 250=0.31% 00:23:38.634 cpu : usr=0.26%, sys=4.14%, ctx=4217, majf=0, minf=4097 00:23:38.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.634 issued rwts: total=16988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.634 job5: (groupid=0, jobs=1): err= 0: pid=4146450: Mon Jun 10 10:51:05 2024 00:23:38.634 read: IOPS=2383, BW=596MiB/s (625MB/s)(5981MiB/10038msec) 00:23:38.634 slat (usec): min=9, max=43658, avg=413.59, stdev=1247.31 00:23:38.634 clat (msec): min=8, max=135, avg=26.42, stdev=11.32 00:23:38.634 lat (msec): min=8, max=135, avg=26.83, stdev=11.53 00:23:38.634 clat percentiles (msec): 00:23:38.634 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 24], 00:23:38.634 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 26], 00:23:38.634 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 60], 00:23:38.634 | 99.00th=[ 67], 99.50th=[ 83], 99.90th=[ 92], 99.95th=[ 92], 00:23:38.634 | 99.99th=[ 136] 00:23:38.634 bw ( KiB/s): min=253952, max=1025024, per=12.31%, avg=610737.90, stdev=185895.02, samples=20 00:23:38.634 iops : min= 992, max= 4004, avg=2385.65, stdev=726.24, samples=20 00:23:38.634 lat (msec) : 10=0.08%, 20=13.35%, 50=80.16%, 100=6.40%, 250=0.01% 00:23:38.634 cpu : usr=0.43%, sys=5.85%, ctx=5719, majf=0, minf=4097 00:23:38.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.634 issued rwts: total=23922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.634 job6: (groupid=0, jobs=1): err= 0: pid=4146458: Mon Jun 10 10:51:05 2024 00:23:38.634 read: IOPS=1005, BW=251MiB/s (264MB/s)(2524MiB/10036msec) 00:23:38.634 slat (usec): min=10, max=21406, avg=987.24, stdev=2414.93 00:23:38.634 clat (msec): min=11, max=106, avg=62.59, stdev=11.76 00:23:38.634 lat (msec): min=11, max=106, avg=63.58, stdev=12.10 00:23:38.634 clat percentiles 
(msec): 00:23:38.634 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 50], 20.00th=[ 52], 00:23:38.634 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:23:38.634 | 70.00th=[ 68], 80.00th=[ 75], 90.00th=[ 77], 95.00th=[ 79], 00:23:38.634 | 99.00th=[ 88], 99.50th=[ 93], 99.90th=[ 95], 99.95th=[ 96], 00:23:38.634 | 99.99th=[ 102] 00:23:38.634 bw ( KiB/s): min=203776, max=392192, per=5.18%, avg=256822.90, stdev=46058.20, samples=20 00:23:38.634 iops : min= 796, max= 1532, avg=1003.15, stdev=179.88, samples=20 00:23:38.634 lat (msec) : 20=0.16%, 50=14.13%, 100=85.70%, 250=0.02% 00:23:38.634 cpu : usr=0.40%, sys=3.92%, ctx=2160, majf=0, minf=4097 00:23:38.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.634 issued rwts: total=10095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.634 job7: (groupid=0, jobs=1): err= 0: pid=4146464: Mon Jun 10 10:51:05 2024 00:23:38.634 read: IOPS=1128, BW=282MiB/s (296MB/s)(2828MiB/10020msec) 00:23:38.634 slat (usec): min=8, max=16297, avg=877.25, stdev=2075.02 00:23:38.634 clat (msec): min=11, max=102, avg=55.77, stdev=17.78 00:23:38.634 lat (msec): min=11, max=107, avg=56.65, stdev=18.13 00:23:38.634 clat percentiles (msec): 00:23:38.634 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 35], 00:23:38.634 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 62], 60.00th=[ 63], 00:23:38.634 | 70.00th=[ 65], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 78], 00:23:38.634 | 99.00th=[ 87], 99.50th=[ 90], 99.90th=[ 95], 99.95th=[ 100], 00:23:38.634 | 99.99th=[ 103] 00:23:38.634 bw ( KiB/s): min=207360, max=619008, per=5.80%, avg=287871.85, stdev=108592.58, samples=20 00:23:38.635 iops : min= 810, max= 2418, avg=1124.45, stdev=424.08, samples=20 00:23:38.635 lat (msec) : 20=0.15%, 50=30.35%, 100=69.47%, 250=0.03% 00:23:38.635 cpu : usr=0.29%, sys=3.94%, ctx=2573, majf=0, minf=4097 00:23:38.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:38.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.635 issued rwts: total=11310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.635 job8: (groupid=0, jobs=1): err= 0: pid=4146486: Mon Jun 10 10:51:05 2024 00:23:38.635 read: IOPS=1201, BW=300MiB/s (315MB/s)(3014MiB/10030msec) 00:23:38.635 slat (usec): min=8, max=28214, avg=813.10, stdev=2371.71 00:23:38.635 clat (usec): min=1419, max=115655, avg=52383.24, stdev=21506.30 00:23:38.635 lat (usec): min=1541, max=115686, avg=53196.34, stdev=21943.05 00:23:38.635 clat percentiles (msec): 00:23:38.635 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 36], 00:23:38.635 | 30.00th=[ 39], 40.00th=[ 52], 50.00th=[ 62], 60.00th=[ 63], 00:23:38.635 | 70.00th=[ 64], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 78], 00:23:38.635 | 99.00th=[ 88], 99.50th=[ 92], 99.90th=[ 99], 99.95th=[ 103], 00:23:38.635 | 99.99th=[ 116] 00:23:38.635 bw ( KiB/s): min=206848, max=990720, per=6.19%, avg=306975.15, stdev=175057.80, samples=20 00:23:38.635 iops : min= 808, max= 3870, avg=1199.10, stdev=683.80, samples=20 00:23:38.635 lat (msec) : 2=0.08%, 4=0.49%, 10=1.01%, 20=12.99%, 50=21.90% 00:23:38.635 lat (msec) : 100=63.47%, 
250=0.06% 00:23:38.635 cpu : usr=0.30%, sys=4.09%, ctx=3037, majf=0, minf=4097 00:23:38.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:38.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.635 issued rwts: total=12056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.635 job9: (groupid=0, jobs=1): err= 0: pid=4146496: Mon Jun 10 10:51:05 2024 00:23:38.635 read: IOPS=1142, BW=286MiB/s (300MB/s)(2868MiB/10037msec) 00:23:38.635 slat (usec): min=8, max=32946, avg=856.19, stdev=2491.68 00:23:38.635 clat (msec): min=10, max=115, avg=55.09, stdev=18.90 00:23:38.635 lat (msec): min=10, max=115, avg=55.95, stdev=19.29 00:23:38.635 clat percentiles (msec): 00:23:38.635 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 38], 00:23:38.635 | 30.00th=[ 42], 40.00th=[ 55], 50.00th=[ 62], 60.00th=[ 64], 00:23:38.635 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 77], 95.00th=[ 79], 00:23:38.635 | 99.00th=[ 88], 99.50th=[ 99], 99.90th=[ 107], 99.95th=[ 107], 00:23:38.635 | 99.99th=[ 108] 00:23:38.635 bw ( KiB/s): min=196608, max=604672, per=5.89%, avg=292058.95, stdev=109084.17, samples=20 00:23:38.635 iops : min= 768, max= 2362, avg=1140.80, stdev=426.11, samples=20 00:23:38.635 lat (msec) : 20=4.28%, 50=32.04%, 100=63.38%, 250=0.31% 00:23:38.635 cpu : usr=0.19%, sys=3.04%, ctx=2885, majf=0, minf=4097 00:23:38.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:38.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.635 issued rwts: total=11471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.635 job10: (groupid=0, jobs=1): err= 0: pid=4146503: Mon Jun 10 10:51:05 2024 00:23:38.635 read: IOPS=2896, BW=724MiB/s (759MB/s)(7262MiB/10030msec) 00:23:38.635 slat (usec): min=8, max=21789, avg=334.34, stdev=965.44 00:23:38.635 clat (usec): min=10322, max=87237, avg=21745.05, stdev=13757.06 00:23:38.635 lat (usec): min=10518, max=91971, avg=22079.39, stdev=13978.77 00:23:38.635 clat percentiles (usec): 00:23:38.635 | 1.00th=[11600], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:23:38.635 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[23987], 00:23:38.635 | 70.00th=[25035], 80.00th=[26084], 90.00th=[39060], 95.00th=[59507], 00:23:38.635 | 99.00th=[65799], 99.50th=[66847], 99.90th=[74974], 99.95th=[77071], 00:23:38.635 | 99.99th=[85459] 00:23:38.635 bw ( KiB/s): min=260096, max=1294848, per=14.96%, avg=742030.55, stdev=395894.27, samples=20 00:23:38.635 iops : min= 1016, max= 5058, avg=2898.50, stdev=1546.53, samples=20 00:23:38.635 lat (msec) : 20=55.90%, 50=36.26%, 100=7.84% 00:23:38.635 cpu : usr=0.40%, sys=5.06%, ctx=7340, majf=0, minf=4097 00:23:38.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:38.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:38.635 issued rwts: total=29048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:38.635 00:23:38.635 Run status group 0 (all jobs): 00:23:38.635 READ: bw=4844MiB/s (5080MB/s), 251MiB/s-724MiB/s 
(264MB/s-759MB/s), io=47.5GiB (51.0GB), run=10020-10038msec 00:23:38.635 00:23:38.635 Disk stats (read/write): 00:23:38.635 nvme0n1: ios=20074/0, merge=0/0, ticks=1230285/0, in_queue=1230285, util=97.74% 00:23:38.635 nvme10n1: ios=46034/0, merge=0/0, ticks=1224369/0, in_queue=1224369, util=97.90% 00:23:38.635 nvme11n1: ios=48100/0, merge=0/0, ticks=1223442/0, in_queue=1223442, util=98.00% 00:23:38.635 nvme2n1: ios=44466/0, merge=0/0, ticks=1223902/0, in_queue=1223902, util=98.12% 00:23:38.635 nvme3n1: ios=33836/0, merge=0/0, ticks=1227109/0, in_queue=1227109, util=98.14% 00:23:38.635 nvme4n1: ios=47711/0, merge=0/0, ticks=1223778/0, in_queue=1223778, util=98.44% 00:23:38.635 nvme5n1: ios=20041/0, merge=0/0, ticks=1229658/0, in_queue=1229658, util=98.54% 00:23:38.635 nvme6n1: ios=22483/0, merge=0/0, ticks=1232742/0, in_queue=1232742, util=98.63% 00:23:38.635 nvme7n1: ios=23975/0, merge=0/0, ticks=1231222/0, in_queue=1231222, util=98.94% 00:23:38.635 nvme8n1: ios=22807/0, merge=0/0, ticks=1226827/0, in_queue=1226827, util=99.09% 00:23:38.635 nvme9n1: ios=57959/0, merge=0/0, ticks=1225799/0, in_queue=1225799, util=99.19% 00:23:38.635 10:51:05 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:38.635 [global] 00:23:38.635 thread=1 00:23:38.635 invalidate=1 00:23:38.635 rw=randwrite 00:23:38.635 time_based=1 00:23:38.635 runtime=10 00:23:38.635 ioengine=libaio 00:23:38.635 direct=1 00:23:38.635 bs=262144 00:23:38.635 iodepth=64 00:23:38.635 norandommap=1 00:23:38.635 numjobs=1 00:23:38.635 00:23:38.635 [job0] 00:23:38.635 filename=/dev/nvme0n1 00:23:38.635 [job1] 00:23:38.635 filename=/dev/nvme10n1 00:23:38.635 [job2] 00:23:38.635 filename=/dev/nvme11n1 00:23:38.635 [job3] 00:23:38.635 filename=/dev/nvme2n1 00:23:38.635 [job4] 00:23:38.635 filename=/dev/nvme3n1 00:23:38.635 [job5] 00:23:38.635 filename=/dev/nvme4n1 00:23:38.635 [job6] 00:23:38.635 filename=/dev/nvme5n1 00:23:38.635 [job7] 00:23:38.635 filename=/dev/nvme6n1 00:23:38.635 [job8] 00:23:38.635 filename=/dev/nvme7n1 00:23:38.635 [job9] 00:23:38.635 filename=/dev/nvme8n1 00:23:38.635 [job10] 00:23:38.635 filename=/dev/nvme9n1 00:23:38.635 Could not set queue depth (nvme0n1) 00:23:38.635 Could not set queue depth (nvme10n1) 00:23:38.635 Could not set queue depth (nvme11n1) 00:23:38.635 Could not set queue depth (nvme2n1) 00:23:38.635 Could not set queue depth (nvme3n1) 00:23:38.635 Could not set queue depth (nvme4n1) 00:23:38.635 Could not set queue depth (nvme5n1) 00:23:38.635 Could not set queue depth (nvme6n1) 00:23:38.635 Could not set queue depth (nvme7n1) 00:23:38.635 Could not set queue depth (nvme8n1) 00:23:38.635 Could not set queue depth (nvme9n1) 00:23:38.635 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:23:38.635 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:38.635 fio-3.35 00:23:38.635 Starting 11 threads 00:23:48.618 00:23:48.618 job0: (groupid=0, jobs=1): err= 0: pid=4148305: Mon Jun 10 10:51:16 2024 00:23:48.618 write: IOPS=1049, BW=262MiB/s (275MB/s)(2642MiB/10070msec); 0 zone resets 00:23:48.618 slat (usec): min=26, max=45059, avg=928.47, stdev=3336.70 00:23:48.618 clat (msec): min=3, max=159, avg=60.04, stdev=22.84 00:23:48.618 lat (msec): min=3, max=159, avg=60.97, stdev=23.36 00:23:48.618 clat percentiles (msec): 00:23:48.618 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 35], 00:23:48.618 | 30.00th=[ 37], 40.00th=[ 54], 50.00th=[ 64], 60.00th=[ 68], 00:23:48.618 | 70.00th=[ 77], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 96], 00:23:48.618 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 153], 99.95th=[ 155], 00:23:48.618 | 99.99th=[ 159] 00:23:48.618 bw ( KiB/s): min=169472, max=487424, per=7.96%, avg=268928.65, stdev=105644.47, samples=20 00:23:48.618 iops : min= 662, max= 1904, avg=1050.50, stdev=412.67, samples=20 00:23:48.618 lat (msec) : 4=0.05%, 10=0.11%, 20=0.21%, 50=36.47%, 100=59.57% 00:23:48.618 lat (msec) : 250=3.59% 00:23:48.618 cpu : usr=5.15%, sys=3.25%, ctx=2190, majf=0, minf=1 00:23:48.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:48.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.618 issued rwts: total=0,10567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.618 job1: (groupid=0, jobs=1): err= 0: pid=4148326: Mon Jun 10 10:51:16 2024 00:23:48.618 write: IOPS=1289, BW=322MiB/s (338MB/s)(3239MiB/10049msec); 0 zone resets 00:23:48.618 slat (usec): min=15, max=36580, avg=731.68, stdev=2319.15 00:23:48.618 clat (msec): min=3, max=108, avg=48.89, stdev=21.61 00:23:48.618 lat (msec): min=3, max=108, avg=49.62, stdev=22.03 00:23:48.618 clat percentiles (msec): 00:23:48.618 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 29], 00:23:48.618 | 30.00th=[ 34], 40.00th=[ 46], 50.00th=[ 52], 60.00th=[ 55], 00:23:48.618 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 81], 00:23:48.618 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 106], 99.95th=[ 107], 00:23:48.618 | 99.99th=[ 109] 00:23:48.618 bw ( KiB/s): min=196096, max=973824, per=9.76%, avg=330060.80, stdev=180202.28, samples=20 00:23:48.618 iops : min= 766, max= 3804, avg=1289.30, stdev=703.92, samples=20 00:23:48.618 lat (msec) : 4=0.06%, 10=0.29%, 20=15.84%, 50=28.94%, 100=54.55% 00:23:48.618 lat (msec) : 250=0.32% 00:23:48.618 cpu : usr=2.76%, sys=3.58%, ctx=2905, majf=0, minf=1 00:23:48.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:48.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.618 issued rwts: total=0,12956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.618 job2: (groupid=0, jobs=1): err= 0: pid=4148330: Mon Jun 10 10:51:16 2024 00:23:48.618 write: IOPS=1280, BW=320MiB/s (336MB/s)(3223MiB/10067msec); 0 zone resets 00:23:48.618 slat (usec): min=15, max=78152, avg=713.09, stdev=3349.46 00:23:48.618 clat (usec): min=668, max=173518, avg=49243.59, stdev=28410.21 00:23:48.618 lat (usec): min=956, max=186052, avg=49956.68, stdev=28991.47 00:23:48.618 clat percentiles (msec): 00:23:48.618 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 17], 20.00th=[ 18], 00:23:48.618 | 30.00th=[ 30], 40.00th=[ 34], 50.00th=[ 48], 60.00th=[ 66], 00:23:48.618 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 92], 00:23:48.618 | 99.00th=[ 109], 99.50th=[ 114], 99.90th=[ 144], 99.95th=[ 146], 00:23:48.618 | 99.99th=[ 146] 00:23:48.618 bw ( KiB/s): min=160768, max=716288, per=9.71%, avg=328398.75, stdev=163311.20, samples=20 00:23:48.618 iops : min= 628, max= 2798, avg=1282.80, stdev=637.94, samples=20 00:23:48.619 lat (usec) : 750=0.01%, 1000=0.01% 00:23:48.619 lat (msec) : 2=0.32%, 4=0.58%, 10=3.26%, 20=22.32%, 50=24.59% 00:23:48.619 lat (msec) : 100=45.97%, 250=2.95% 00:23:48.619 cpu : usr=2.27%, sys=3.63%, ctx=2753, majf=0, minf=1 00:23:48.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:48.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.619 issued rwts: total=0,12892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.619 job3: (groupid=0, jobs=1): err= 0: pid=4148335: Mon Jun 10 10:51:16 2024 00:23:48.619 write: IOPS=1050, BW=263MiB/s (275MB/s)(2644MiB/10067msec); 0 zone resets 00:23:48.619 slat (usec): min=17, max=60648, avg=890.22, stdev=3609.12 00:23:48.619 clat (usec): min=556, max=154922, avg=60003.09, stdev=24741.87 00:23:48.619 lat (usec): min=1805, max=155449, avg=60893.31, stdev=25329.35 00:23:48.619 clat percentiles (msec): 00:23:48.619 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 35], 00:23:48.619 | 30.00th=[ 48], 40.00th=[ 59], 50.00th=[ 67], 60.00th=[ 71], 00:23:48.619 | 70.00th=[ 75], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 93], 00:23:48.619 | 99.00th=[ 113], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:23:48.619 | 99.99th=[ 155] 00:23:48.619 bw ( KiB/s): min=164352, max=659456, per=7.96%, avg=269098.80, stdev=115065.01, samples=20 00:23:48.619 iops : min= 642, max= 2576, avg=1051.15, stdev=449.46, samples=20 00:23:48.619 lat (usec) : 750=0.01% 00:23:48.619 lat (msec) : 2=0.05%, 4=0.34%, 10=1.14%, 20=9.86%, 50=20.20% 00:23:48.619 lat (msec) : 100=65.66%, 250=2.74% 00:23:48.619 cpu : usr=2.02%, sys=3.08%, ctx=2288, majf=0, minf=1 00:23:48.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:48.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.619 issued rwts: total=0,10576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.619 job4: (groupid=0, jobs=1): err= 0: pid=4148337: Mon Jun 10 10:51:16 2024 00:23:48.619 write: IOPS=1918, BW=480MiB/s (503MB/s)(4801MiB/10013msec); 0 zone resets 00:23:48.619 slat (usec): min=13, 
max=57027, avg=493.18, stdev=1869.58 00:23:48.619 clat (usec): min=716, max=144172, avg=32863.35, stdev=22915.18 00:23:48.619 lat (usec): min=759, max=148435, avg=33356.52, stdev=23298.20 00:23:48.619 clat percentiles (msec): 00:23:48.619 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 16], 00:23:48.619 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 30], 60.00th=[ 34], 00:23:48.619 | 70.00th=[ 36], 80.00th=[ 50], 90.00th=[ 70], 95.00th=[ 84], 00:23:48.619 | 99.00th=[ 103], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 124], 00:23:48.619 | 99.99th=[ 144] 00:23:48.619 bw ( KiB/s): min=164864, max=1048576, per=13.67%, avg=462119.11, stdev=292972.58, samples=19 00:23:48.619 iops : min= 644, max= 4096, avg=1805.11, stdev=1144.46, samples=19 00:23:48.619 lat (usec) : 750=0.01%, 1000=0.01% 00:23:48.619 lat (msec) : 2=0.15%, 4=0.41%, 10=0.82%, 20=46.47%, 50=32.28% 00:23:48.619 lat (msec) : 100=18.72%, 250=1.14% 00:23:48.619 cpu : usr=3.17%, sys=4.81%, ctx=3646, majf=0, minf=1 00:23:48.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:48.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.619 issued rwts: total=0,19205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.619 job5: (groupid=0, jobs=1): err= 0: pid=4148346: Mon Jun 10 10:51:16 2024 00:23:48.619 write: IOPS=1298, BW=325MiB/s (340MB/s)(3263MiB/10053msec); 0 zone resets 00:23:48.619 slat (usec): min=15, max=40360, avg=728.34, stdev=2484.02 00:23:48.619 clat (msec): min=2, max=121, avg=48.56, stdev=23.23 00:23:48.619 lat (msec): min=2, max=123, avg=49.29, stdev=23.67 00:23:48.619 clat percentiles (msec): 00:23:48.619 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:23:48.619 | 30.00th=[ 34], 40.00th=[ 47], 50.00th=[ 53], 60.00th=[ 59], 00:23:48.619 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 82], 00:23:48.619 | 99.00th=[ 95], 99.50th=[ 101], 99.90th=[ 106], 99.95th=[ 116], 00:23:48.619 | 99.99th=[ 120] 00:23:48.619 bw ( KiB/s): min=183296, max=867328, per=9.84%, avg=332467.20, stdev=186062.17, samples=20 00:23:48.619 iops : min= 716, max= 3388, avg=1298.70, stdev=726.81, samples=20 00:23:48.619 lat (msec) : 4=0.02%, 10=0.21%, 20=22.44%, 50=21.36%, 100=55.55% 00:23:48.619 lat (msec) : 250=0.42% 00:23:48.619 cpu : usr=2.64%, sys=3.40%, ctx=2735, majf=0, minf=1 00:23:48.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:48.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.619 issued rwts: total=0,13050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.619 job6: (groupid=0, jobs=1): err= 0: pid=4148354: Mon Jun 10 10:51:16 2024 00:23:48.619 write: IOPS=1132, BW=283MiB/s (297MB/s)(2851MiB/10069msec); 0 zone resets 00:23:48.619 slat (usec): min=11, max=51164, avg=773.27, stdev=3141.27 00:23:48.619 clat (usec): min=420, max=156302, avg=55725.69, stdev=28943.35 00:23:48.619 lat (usec): min=480, max=164888, avg=56498.96, stdev=29486.42 00:23:48.619 clat percentiles (usec): 00:23:48.619 | 1.00th=[ 1045], 5.00th=[ 2376], 10.00th=[ 4686], 20.00th=[ 26084], 00:23:48.619 | 30.00th=[ 49546], 40.00th=[ 54264], 50.00th=[ 63701], 60.00th=[ 69731], 00:23:48.619 | 70.00th=[ 74974], 80.00th=[ 78119], 
90.00th=[ 86508], 95.00th=[ 91751], 00:23:48.619 | 99.00th=[105382], 99.50th=[108528], 99.90th=[147850], 99.95th=[152044], 00:23:48.619 | 99.99th=[156238] 00:23:48.619 bw ( KiB/s): min=173056, max=569856, per=8.59%, avg=290321.65, stdev=102751.32, samples=20 00:23:48.619 iops : min= 676, max= 2226, avg=1134.05, stdev=401.34, samples=20 00:23:48.619 lat (usec) : 500=0.04%, 750=0.42%, 1000=0.50% 00:23:48.619 lat (msec) : 2=2.66%, 4=5.50%, 10=6.96%, 20=3.66%, 50=11.10% 00:23:48.619 lat (msec) : 100=67.65%, 250=1.52% 00:23:48.619 cpu : usr=4.82%, sys=3.76%, ctx=2774, majf=0, minf=1 00:23:48.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:48.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.619 issued rwts: total=0,11402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.619 job7: (groupid=0, jobs=1): err= 0: pid=4148360: Mon Jun 10 10:51:16 2024 00:23:48.619 write: IOPS=1033, BW=258MiB/s (271MB/s)(2596MiB/10049msec); 0 zone resets 00:23:48.619 slat (usec): min=19, max=57949, avg=929.21, stdev=3584.80 00:23:48.619 clat (usec): min=615, max=161224, avg=60995.04, stdev=21717.77 00:23:48.619 lat (usec): min=912, max=167944, avg=61924.25, stdev=22286.15 00:23:48.619 clat percentiles (msec): 00:23:48.619 | 1.00th=[ 3], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 41], 00:23:48.619 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 68], 00:23:48.619 | 70.00th=[ 74], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 94], 00:23:48.619 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 127], 99.95th=[ 140], 00:23:48.619 | 99.99th=[ 161] 00:23:48.619 bw ( KiB/s): min=186368, max=452096, per=7.81%, avg=264166.40, stdev=74362.92, samples=20 00:23:48.619 iops : min= 728, max= 1766, avg=1031.90, stdev=290.48, samples=20 00:23:48.619 lat (usec) : 750=0.02%, 1000=0.04% 00:23:48.619 lat (msec) : 2=0.07%, 4=1.92%, 10=0.74%, 20=1.02%, 50=22.03% 00:23:48.619 lat (msec) : 100=71.34%, 250=2.83% 00:23:48.619 cpu : usr=2.05%, sys=3.11%, ctx=2284, majf=0, minf=1 00:23:48.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:48.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.619 issued rwts: total=0,10383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.619 job8: (groupid=0, jobs=1): err= 0: pid=4148386: Mon Jun 10 10:51:16 2024 00:23:48.619 write: IOPS=1104, BW=276MiB/s (290MB/s)(2781MiB/10067msec); 0 zone resets 00:23:48.619 slat (usec): min=20, max=50530, avg=835.27, stdev=2678.08 00:23:48.619 clat (usec): min=499, max=150001, avg=57065.83, stdev=20259.96 00:23:48.619 lat (usec): min=554, max=150043, avg=57901.09, stdev=20670.59 00:23:48.619 clat percentiles (msec): 00:23:48.619 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 37], 00:23:48.619 | 30.00th=[ 44], 40.00th=[ 51], 50.00th=[ 54], 60.00th=[ 64], 00:23:48.619 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 84], 95.00th=[ 91], 00:23:48.619 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 122], 99.95th=[ 150], 00:23:48.619 | 99.99th=[ 150] 00:23:48.619 bw ( KiB/s): min=181248, max=498688, per=8.38%, avg=283136.00, stdev=83109.35, samples=20 00:23:48.619 iops : min= 708, max= 1948, avg=1106.00, stdev=324.65, samples=20 00:23:48.619 lat (usec) : 
500=0.01%, 1000=0.10% 00:23:48.619 lat (msec) : 2=0.06%, 4=0.18%, 10=0.51%, 20=0.67%, 50=38.02% 00:23:48.619 lat (msec) : 100=59.30%, 250=1.14% 00:23:48.619 cpu : usr=2.40%, sys=3.51%, ctx=2581, majf=0, minf=1 00:23:48.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:48.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.619 issued rwts: total=0,11123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.619 job9: (groupid=0, jobs=1): err= 0: pid=4148395: Mon Jun 10 10:51:16 2024 00:23:48.619 write: IOPS=977, BW=244MiB/s (256MB/s)(2460MiB/10067msec); 0 zone resets 00:23:48.619 slat (usec): min=21, max=41480, avg=975.96, stdev=3428.56 00:23:48.619 clat (msec): min=5, max=145, avg=64.48, stdev=21.39 00:23:48.619 lat (msec): min=6, max=164, avg=65.46, stdev=21.95 00:23:48.619 clat percentiles (msec): 00:23:48.619 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 43], 00:23:48.619 | 30.00th=[ 52], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 74], 00:23:48.619 | 70.00th=[ 77], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 95], 00:23:48.619 | 99.00th=[ 113], 99.50th=[ 125], 99.90th=[ 134], 99.95th=[ 136], 00:23:48.619 | 99.99th=[ 146] 00:23:48.620 bw ( KiB/s): min=163328, max=425472, per=7.40%, avg=250240.00, stdev=72262.18, samples=20 00:23:48.620 iops : min= 638, max= 1662, avg=977.50, stdev=282.27, samples=20 00:23:48.620 lat (msec) : 10=0.25%, 20=1.36%, 50=25.40%, 100=69.98%, 250=3.01% 00:23:48.620 cpu : usr=1.99%, sys=3.10%, ctx=2113, majf=0, minf=1 00:23:48.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:48.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.620 issued rwts: total=0,9839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.620 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:48.620 job10: (groupid=0, jobs=1): err= 0: pid=4148402: Mon Jun 10 10:51:16 2024 00:23:48.620 write: IOPS=1092, BW=273MiB/s (286MB/s)(2745MiB/10049msec); 0 zone resets 00:23:48.620 slat (usec): min=18, max=63698, avg=883.82, stdev=3789.12 00:23:48.620 clat (msec): min=5, max=166, avg=57.67, stdev=23.03 00:23:48.620 lat (msec): min=5, max=166, avg=58.55, stdev=23.64 00:23:48.620 clat percentiles (msec): 00:23:48.620 | 1.00th=[ 17], 5.00th=[ 19], 10.00th=[ 31], 20.00th=[ 36], 00:23:48.620 | 30.00th=[ 39], 40.00th=[ 53], 50.00th=[ 63], 60.00th=[ 65], 00:23:48.620 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 94], 00:23:48.620 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 133], 99.95th=[ 140], 00:23:48.620 | 99.99th=[ 167] 00:23:48.620 bw ( KiB/s): min=167424, max=499712, per=8.27%, avg=279475.20, stdev=101736.46, samples=20 00:23:48.620 iops : min= 654, max= 1952, avg=1091.70, stdev=397.41, samples=20 00:23:48.620 lat (msec) : 10=0.09%, 20=6.89%, 50=30.42%, 100=59.56%, 250=3.03% 00:23:48.620 cpu : usr=2.41%, sys=2.98%, ctx=2375, majf=0, minf=1 00:23:48.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:48.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:48.620 issued rwts: total=0,10980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.620 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:23:48.620 00:23:48.620 Run status group 0 (all jobs): 00:23:48.620 WRITE: bw=3301MiB/s (3462MB/s), 244MiB/s-480MiB/s (256MB/s-503MB/s), io=32.5GiB (34.9GB), run=10013-10070msec 00:23:48.620 00:23:48.620 Disk stats (read/write): 00:23:48.620 nvme0n1: ios=49/20977, merge=0/0, ticks=11/1231132, in_queue=1231143, util=97.72% 00:23:48.620 nvme10n1: ios=0/25744, merge=0/0, ticks=0/1233712, in_queue=1233712, util=97.78% 00:23:48.620 nvme11n1: ios=0/25657, merge=0/0, ticks=0/1235396, in_queue=1235396, util=97.89% 00:23:48.620 nvme2n1: ios=0/21011, merge=0/0, ticks=0/1232809, in_queue=1232809, util=98.01% 00:23:48.620 nvme3n1: ios=0/38087, merge=0/0, ticks=0/1235981, in_queue=1235981, util=98.03% 00:23:48.620 nvme4n1: ios=0/25934, merge=0/0, ticks=0/1234962, in_queue=1234962, util=98.36% 00:23:48.620 nvme5n1: ios=0/22644, merge=0/0, ticks=0/1231255, in_queue=1231255, util=98.45% 00:23:48.620 nvme6n1: ios=0/20621, merge=0/0, ticks=0/1235163, in_queue=1235163, util=98.52% 00:23:48.620 nvme7n1: ios=0/22054, merge=0/0, ticks=0/1231648, in_queue=1231648, util=98.83% 00:23:48.620 nvme8n1: ios=0/19500, merge=0/0, ticks=0/1230514, in_queue=1230514, util=98.95% 00:23:48.620 nvme9n1: ios=0/21725, merge=0/0, ticks=0/1232638, in_queue=1232638, util=99.04% 00:23:48.620 10:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:23:48.620 10:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:23:48.620 10:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:48.620 10:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:48.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:48.620 10:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:49.553 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 
-- # local i=0 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:49.553 10:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:50.488 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.488 10:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:51.424 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:51.424 10:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:51.424 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:51.424 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:51.424 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:23:51.424 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:51.424 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # 
grep -q -w SPDK4 00:23:51.424 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:51.425 10:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:51.425 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.425 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:51.425 10:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.425 10:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:51.425 10:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:52.358 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:52.358 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.359 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:52.359 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.359 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.359 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:52.925 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:52.925 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.926 10:51:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:53.860 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.860 10:51:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:54.794 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.794 10:51:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:55.728 
NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.728 10:51:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:56.663 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:56.663 10:51:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:57.598 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:57.598 10:51:26 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:23:57.598 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:57.599 rmmod nvme_rdma 00:23:57.599 rmmod nvme_fabrics 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 4141758 ']' 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 4141758 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 4141758 ']' 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 4141758 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4141758 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4141758' 00:23:57.599 killing process with pid 4141758 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 4141758 00:23:57.599 10:51:26 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@973 -- # wait 4141758 00:23:57.858 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.858 10:51:26 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:57.858 00:23:57.858 real 1m3.563s 00:23:57.858 user 4m6.720s 00:23:57.858 sys 0m16.172s 00:23:57.858 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:57.858 10:51:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:57.858 ************************************ 00:23:57.858 END TEST nvmf_multiconnection 00:23:57.858 ************************************ 00:23:58.116 10:51:26 nvmf_rdma -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:58.116 10:51:26 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:58.116 10:51:26 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:58.116 10:51:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:58.116 ************************************ 00:23:58.116 START TEST nvmf_initiator_timeout 00:23:58.116 ************************************ 00:23:58.116 10:51:26 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:23:58.116 * Looking for test storage... 00:23:58.116 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.116 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.117 10:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 
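For reference, the trace around this point shows nvmf/common.sh classifying the host's RDMA-capable NICs by PCI vendor:device ID (Intel E810 on this rig) before deciding how to bring up the transport. A minimal standalone sketch of that classification, using only the IDs visible in this trace; the helper name list_rdma_nics is an assumption for illustration, not part of the SPDK scripts:

    #!/usr/bin/env bash
    # Classify RDMA-capable NICs by PCI vendor:device ID, mirroring the
    # gather_supported_nvmf_pci_devs trace above. Only IDs that appear in
    # this log are handled; list_rdma_nics itself is a hypothetical helper.
    list_rdma_nics() {
        local intel=0x8086 mellanox=0x15b3 pci vendor device
        local -a e810=() mlx=()
        for pci in /sys/bus/pci/devices/*; do
            vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
            case "$vendor:$device" in
                "$intel:0x1592"|"$intel:0x159b") e810+=("${pci##*/}") ;;
                "$mellanox:0x1017"|"$mellanox:0x1019") mlx+=("${pci##*/}") ;;
            esac
        done
        echo "e810: ${e810[*]:-none}"
        echo "mlx5: ${mlx[*]:-none}"
    }
    list_rdma_nics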
00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:04.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:04.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:04.719 10:51:32 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # modinfo irdma 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.719 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:04.719 Found net devices under 0000:af:00.0: cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:04.720 Found net devices under 0000:af:00.1: cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ 
rdma == rdma ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:24:04.720 10:51:32 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:24:04.720 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:04.720 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:24:04.720 altname enp175s0f0np0 00:24:04.720 altname ens801f0np0 00:24:04.720 inet 192.168.100.8/24 scope global cvl_0_0 00:24:04.720 valid_lft forever preferred_lft forever 00:24:04.720 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:24:04.720 valid_lft forever preferred_lft forever 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:24:04.720 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:04.720 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:24:04.720 altname enp175s0f1np1 00:24:04.720 altname ens801f1np1 00:24:04.720 inet 192.168.100.9/24 scope global cvl_0_1 00:24:04.720 valid_lft forever preferred_lft forever 00:24:04.720 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:24:04.720 valid_lft forever preferred_lft forever 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:04.720 192.168.100.9' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:04.720 192.168.100.9' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:04.720 192.168.100.9' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:24:04.720 10:51:32 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:04.720 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=4155349 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 4155349 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 4155349 ']' 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:04.721 10:51:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 [2024-06-10 10:51:32.769447] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:24:04.721 [2024-06-10 10:51:32.769493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.721 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.721 [2024-06-10 10:51:32.829403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.721 [2024-06-10 10:51:32.912204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.721 [2024-06-10 10:51:32.912240] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.721 [2024-06-10 10:51:32.912247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.721 [2024-06-10 10:51:32.912252] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.721 [2024-06-10 10:51:32.912257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
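At this point the harness has modprobe'd nvme-rdma and launched the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, nvmfpid=4155349 above), then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming a local SPDK checkout; the ./build/bin and ./scripts paths are placeholders, not the Jenkins workspace layout:

    # Start the target and poll its RPC socket, as waitforlisten does above.
    ./build/bin/nvmf_tgt -i 0 -m 0xF &
    nvmfpid=$!
    # rpc_get_methods is a cheap no-op RPC; success means the target is ready.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "target up, pid $nvmfpid"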
00:24:04.721 [2024-06-10 10:51:32.912304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.721 [2024-06-10 10:51:32.912400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.721 [2024-06-10 10:51:32.912465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.721 [2024-06-10 10:51:32.912467] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 Malloc0 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 Delay0 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 [2024-06-10 10:51:33.662259] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1e8d250/0x1e8c890) succeed. 00:24:04.721 [2024-06-10 10:51:33.671264] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1e8e540/0x1e8ce10) succeed. 00:24:04.721 [2024-06-10 10:51:33.671285] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.721 [2024-06-10 10:51:33.703572] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.721 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:04.980 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:04.980 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:24:04.980 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:04.980 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:04.980 10:51:33 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4155841 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:07.514 10:51:35 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:07.514 [global] 00:24:07.514 thread=1 00:24:07.514 invalidate=1 00:24:07.514 rw=write 00:24:07.514 time_based=1 00:24:07.514 runtime=60 00:24:07.514 ioengine=libaio 00:24:07.514 direct=1 00:24:07.514 bs=4096 00:24:07.514 iodepth=1 00:24:07.514 norandommap=0 00:24:07.514 numjobs=1 00:24:07.514 00:24:07.514 verify_dump=1 00:24:07.514 verify_backlog=512 00:24:07.514 verify_state_save=0 00:24:07.514 do_verify=1 00:24:07.514 verify=crc32c-intel 00:24:07.514 [job0] 00:24:07.514 filename=/dev/nvme0n1 00:24:07.514 Could not set queue depth (nvme0n1) 00:24:07.514 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:07.514 fio-3.35 00:24:07.514 Starting 1 thread 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 true 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 true 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 true 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 true 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.046 10:51:38 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:13.332 10:51:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:13.332 10:51:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.332 10:51:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 true 00:24:13.332 10:51:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.332 10:51:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # 
rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:13.332 10:51:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.332 10:51:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 true 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 true 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 true 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:13.332 10:51:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4155841 00:25:09.560 00:25:09.560 job0: (groupid=0, jobs=1): err= 0: pid=4155958: Mon Jun 10 10:52:36 2024 00:25:09.560 read: IOPS=1331, BW=5325KiB/s (5453kB/s)(312MiB/60000msec) 00:25:09.560 slat (nsec): min=1758, max=820791, avg=7274.81, stdev=3069.30 00:25:09.560 clat (usec): min=42, max=285, avg=105.52, stdev= 5.89 00:25:09.560 lat (usec): min=86, max=928, avg=112.80, stdev= 6.68 00:25:09.560 clat percentiles (usec): 00:25:09.560 | 1.00th=[ 94], 5.00th=[ 97], 10.00th=[ 98], 20.00th=[ 101], 00:25:09.560 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 108], 00:25:09.560 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 114], 95.00th=[ 116], 00:25:09.560 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 131], 00:25:09.560 | 99.99th=[ 151] 00:25:09.560 write: IOPS=1338, BW=5353KiB/s (5481kB/s)(314MiB/60000msec); 0 zone resets 00:25:09.560 slat (usec): min=3, max=7839, avg= 9.72, stdev=34.75 00:25:09.560 clat (usec): min=72, max=41520k, avg=621.20, stdev=146530.14 00:25:09.560 lat (usec): min=89, max=41520k, avg=630.93, stdev=146530.14 00:25:09.560 clat percentiles (usec): 00:25:09.560 | 1.00th=[ 93], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 00:25:09.560 | 30.00th=[ 101], 40.00th=[ 102], 50.00th=[ 104], 60.00th=[ 105], 00:25:09.560 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 112], 95.00th=[ 115], 00:25:09.560 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 133], 00:25:09.560 | 99.99th=[ 174] 00:25:09.560 bw ( KiB/s): min= 4616, max=18928, per=100.00%, avg=16932.76, stdev=2444.34, samples=37 00:25:09.560 iops : min= 1154, max= 4732, avg=4233.19, stdev=611.09, samples=37 00:25:09.560 lat (usec) : 50=0.01%, 100=21.64%, 250=78.35%, 500=0.01% 00:25:09.560 lat (msec) : 10=0.01%, >=2000=0.01% 00:25:09.561 cpu : usr=1.54%, sys=2.81%, ctx=160168, majf=0, minf=107 00:25:09.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:09.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.561 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.561 issued rwts: total=79872,80288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:09.561 00:25:09.561 Run status group 0 (all jobs): 00:25:09.561 READ: bw=5325KiB/s (5453kB/s), 5325KiB/s-5325KiB/s (5453kB/s-5453kB/s), io=312MiB (327MB), run=60000-60000msec 00:25:09.561 WRITE: bw=5353KiB/s (5481kB/s), 5353KiB/s-5353KiB/s (5481kB/s-5481kB/s), io=314MiB (329MB), run=60000-60000msec 00:25:09.561 00:25:09.561 Disk stats (read/write): 00:25:09.561 nvme0n1: ios=79729/79872, merge=0/0, ticks=7856/7630, in_queue=15486, util=99.49% 00:25:09.561 10:52:36 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:09.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:09.561 nvmf hotplug test: fio successful as expected 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:09.561 
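The rpc_cmd sequence earlier in this trace is the core of the initiator timeout test: fio keeps a verified write workload running while the delay bdev's latency is pushed above the host's default 30-second NVMe I/O timeout and then dropped back before the host gives up. A minimal stand-alone sketch of that sequence, with the latency values copied from the trace (bdev_delay_update_latency takes the delay bdev name, a latency type, and a latency in microseconds; the scripts/rpc.py path and the Delay0 bdev are assumed to be set up as in the test):

# Sketch of the latency-injection pattern seen in the trace above.
rpc=scripts/rpc.py

# Raise average and p99 latencies above the ~30 s host I/O timeout.
$rpc bdev_delay_update_latency Delay0 avg_read 31000000
$rpc bdev_delay_update_latency Delay0 avg_write 31000000
$rpc bdev_delay_update_latency Delay0 p99_read 31000000
$rpc bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3 # hold I/O behind the inflated latency for a few seconds

# Restore the original 30 us latencies before the host times out.
for t in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$t" 30
done

If the host really did time out, fio would report I/O errors instead of the clean 60-second run and the "fio successful as expected" message shown above.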
rmmod nvme_rdma 00:25:09.561 rmmod nvme_fabrics 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 4155349 ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 4155349 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 4155349 ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 4155349 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4155349 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4155349' 00:25:09.561 killing process with pid 4155349 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 4155349 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 4155349 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:09.561 00:25:09.561 real 1m10.664s 00:25:09.561 user 4m26.199s 00:25:09.561 sys 0m6.632s 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:09.561 10:52:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:09.561 ************************************ 00:25:09.561 END TEST nvmf_initiator_timeout 00:25:09.561 ************************************ 00:25:09.561 10:52:37 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:25:09.561 10:52:37 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:25:09.561 10:52:37 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:25:09.561 10:52:37 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:25:09.561 10:52:37 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:09.561 10:52:37 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:09.561 10:52:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:09.561 ************************************ 00:25:09.561 START TEST nvmf_device_removal 00:25:09.561 ************************************ 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1124 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:25:09.561 * Looking for test storage... 
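The teardown earlier in the trace follows a fixed pattern: disconnect the kernel initiator from the subsystem, then poll lsblk until no block device reports the namespace's serial anymore. A stand-alone sketch of such a wait loop (hypothetical helper; the serial and NQN are taken from the trace, the retry budget is an assumption):

# Disconnect and wait for the namespace's serial to vanish from lsblk.
serial=SPDKISFASTANDAWESOME
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

for _ in $(seq 1 15); do
    # Done as soon as no block device reports the serial anymore.
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || exit 0
    sleep 1
done
echo "namespace with serial $serial did not disappear" >&2
exit 1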
00:25:09.561 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output ']' 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh ]] 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:25:09.561 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 
00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:25:09.562 10:52:37 
nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h ]] 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:25:09.562 #define SPDK_CONFIG_H 00:25:09.562 #define SPDK_CONFIG_APPS 1 00:25:09.562 #define SPDK_CONFIG_ARCH native 00:25:09.562 #undef SPDK_CONFIG_ASAN 00:25:09.562 #undef SPDK_CONFIG_AVAHI 00:25:09.562 #undef SPDK_CONFIG_CET 00:25:09.562 #define SPDK_CONFIG_COVERAGE 1 00:25:09.562 #define SPDK_CONFIG_CROSS_PREFIX 00:25:09.562 #undef SPDK_CONFIG_CRYPTO 00:25:09.562 #undef SPDK_CONFIG_CRYPTO_MLX5 00:25:09.562 #undef SPDK_CONFIG_CUSTOMOCF 00:25:09.562 #undef SPDK_CONFIG_DAOS 00:25:09.562 #define SPDK_CONFIG_DAOS_DIR 00:25:09.562 #define SPDK_CONFIG_DEBUG 1 00:25:09.562 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:25:09.562 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:25:09.562 #define SPDK_CONFIG_DPDK_INC_DIR 00:25:09.562 #define SPDK_CONFIG_DPDK_LIB_DIR 00:25:09.562 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:25:09.562 #undef SPDK_CONFIG_DPDK_UADK 00:25:09.562 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:25:09.562 #define SPDK_CONFIG_EXAMPLES 1 00:25:09.562 #undef SPDK_CONFIG_FC 00:25:09.562 #define SPDK_CONFIG_FC_PATH 00:25:09.562 #define SPDK_CONFIG_FIO_PLUGIN 1 00:25:09.562 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:25:09.562 #undef SPDK_CONFIG_FUSE 00:25:09.562 #undef SPDK_CONFIG_FUZZER 00:25:09.562 #define SPDK_CONFIG_FUZZER_LIB 00:25:09.562 #undef SPDK_CONFIG_GOLANG 00:25:09.562 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:25:09.562 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:25:09.562 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:25:09.562 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:25:09.562 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:25:09.562 #undef SPDK_CONFIG_HAVE_LIBBSD 00:25:09.562 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:25:09.562 #define SPDK_CONFIG_IDXD 1 00:25:09.562 
#define SPDK_CONFIG_IDXD_KERNEL 1 00:25:09.562 #undef SPDK_CONFIG_IPSEC_MB 00:25:09.562 #define SPDK_CONFIG_IPSEC_MB_DIR 00:25:09.562 #define SPDK_CONFIG_ISAL 1 00:25:09.562 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:25:09.562 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:25:09.562 #define SPDK_CONFIG_LIBDIR 00:25:09.562 #undef SPDK_CONFIG_LTO 00:25:09.562 #define SPDK_CONFIG_MAX_LCORES 00:25:09.562 #define SPDK_CONFIG_NVME_CUSE 1 00:25:09.562 #undef SPDK_CONFIG_OCF 00:25:09.562 #define SPDK_CONFIG_OCF_PATH 00:25:09.562 #define SPDK_CONFIG_OPENSSL_PATH 00:25:09.562 #undef SPDK_CONFIG_PGO_CAPTURE 00:25:09.562 #define SPDK_CONFIG_PGO_DIR 00:25:09.562 #undef SPDK_CONFIG_PGO_USE 00:25:09.562 #define SPDK_CONFIG_PREFIX /usr/local 00:25:09.562 #undef SPDK_CONFIG_RAID5F 00:25:09.562 #undef SPDK_CONFIG_RBD 00:25:09.562 #define SPDK_CONFIG_RDMA 1 00:25:09.562 #define SPDK_CONFIG_RDMA_PROV verbs 00:25:09.562 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:25:09.562 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:25:09.562 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:25:09.562 #define SPDK_CONFIG_SHARED 1 00:25:09.562 #undef SPDK_CONFIG_SMA 00:25:09.562 #define SPDK_CONFIG_TESTS 1 00:25:09.562 #undef SPDK_CONFIG_TSAN 00:25:09.562 #define SPDK_CONFIG_UBLK 1 00:25:09.562 #define SPDK_CONFIG_UBSAN 1 00:25:09.562 #undef SPDK_CONFIG_UNIT_TESTS 00:25:09.562 #undef SPDK_CONFIG_URING 00:25:09.562 #define SPDK_CONFIG_URING_PATH 00:25:09.562 #undef SPDK_CONFIG_URING_ZNS 00:25:09.562 #undef SPDK_CONFIG_USDT 00:25:09.562 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:25:09.562 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:25:09.562 #undef SPDK_CONFIG_VFIO_USER 00:25:09.562 #define SPDK_CONFIG_VFIO_USER_DIR 00:25:09.562 #define SPDK_CONFIG_VHOST 1 00:25:09.562 #define SPDK_CONFIG_VIRTIO 1 00:25:09.562 #undef SPDK_CONFIG_VTUNE 00:25:09.562 #define SPDK_CONFIG_VTUNE_DIR 00:25:09.562 #define SPDK_CONFIG_WERROR 1 00:25:09.562 #define SPDK_CONFIG_WPDK_DIR 00:25:09.562 #undef SPDK_CONFIG_XNVME 00:25:09.562 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.562 10:52:37 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/../../../ 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.run_test_name 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:25:09.563 10:52:37 
nvmf_rdma.nvmf_device_removal -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power ]] 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # : 1 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # : 1 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # : 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # : 1 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # : 1 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # : rdma 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # : 1 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # : 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # : 0 00:25:09.563 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # : 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # : true 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # : e810 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # : 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # : 0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:25:09.564 10:52:37 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@200 -- # cat 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:25:09.564 10:52:37 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # export valgrind= 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # valgrind= 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # uname -s 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:25:09.564 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKE=make 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # TEST_MODE= 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # for i in "$@" 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@301 -- # case "$i" in 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # [[ -z 4165820 ]] 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # kill -0 4165820 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@331 -- # local mount target_dir 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # mktemp -udt 
spdk.XXXXXX 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.E94ppo 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target /tmp/spdk.E94ppo/tests/target /tmp/spdk.E94ppo 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # df -T 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:25:09.565 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=900243456 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4384186368 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=89509560320 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=95562715136 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=6053154816 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:25:09.566 
10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=47771435008 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781355520 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=19089309696 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=19112546304 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=23236608 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=47780855808 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781359616 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=503808 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=9556267008 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=9556271104 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:25:09.566 * Looking for test storage... 
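The trace above shows set_test_storage walking the output of df -T into associative arrays (mounts, fss, sizes, avails, uses); the lines that follow compare the chosen mount's free space against the ~2.2 GB requested_size and export SPDK_TEST_STORAGE accordingly. A compact stand-alone sketch of the same idea (a hypothetical simplification, not the exact helper; testdir and storage_fallback are the candidate directories seen in the trace):

# Pick the first candidate directory whose filesystem has enough free space.
requested_size=2214592512 # bytes, as in the trace

have_space() {
    local dir=$1 avail_kb
    # df -Pk prints: filesystem 1024-blocks used available capacity mount
    avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    (( avail_kb * 1024 >= requested_size ))
}

for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    if [[ -d $target_dir ]] && have_space "$target_dir"; then
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done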
00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # local target_space new_size 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # mount=/ 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # target_space=89509560320 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # new_size=8267747328 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:09.566 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:09.567 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@389 -- # return 0 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1681 -- # set -o errtrace 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1686 -- # true 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1688 -- # xtrace_fd 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- 
# xtrace_restore 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.567 10:52:37 
nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:25:09.567 
10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:25:09.567 10:52:37 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.839 10:52:43 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:14.839 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:14.839 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@375 -- # (( 1 != 1 )) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@377 -- # modinfo irdma 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:14.839 Found net devices under 0000:af:00.0: cvl_0_0 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:14.839 Found net devices under 0000:af:00.1: cvl_0_1 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:14.839 
10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]]
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_0
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]]
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]]
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_1
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2
00:25:14.839 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_0
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show cvl_0_0
00:25:14.840 16: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000
00:25:14.840 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff
00:25:14.840 altname enp175s0f0np0
00:25:14.840 altname ens801f0np0
00:25:14.840 inet 192.168.100.8/24 scope global cvl_0_0
00:25:14.840 valid_lft forever preferred_lft forever
00:25:14.840 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll
00:25:14.840 valid_lft forever preferred_lft forever
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal --
nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:25:14.840 17: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:25:14.840 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:25:14.840 altname enp175s0f1np1 00:25:14.840 altname ens801f1np1 00:25:14.840 inet 192.168.100.9/24 scope global cvl_0_1 00:25:14.840 valid_lft forever preferred_lft forever 00:25:14.840 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:25:14.840 valid_lft forever preferred_lft forever 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_0 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_1 00:25:14.840 10:52:43 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:14.840 192.168.100.9' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:14.840 192.168.100.9' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:14.840 192.168.100.9' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:14.840 10:52:43 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:25:15.120 ************************************ 
00:25:15.120 START TEST nvmf_device_removal_pci_remove_no_srq ************************************
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1124 -- # test_remove_and_rescan --no-srq
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@723 -- # xtrace_disable
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=4169328
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 4169328
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 4169328 ']'
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:15.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable
00:25:15.120 10:52:43 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:15.120 [2024-06-10 10:52:43.920263] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:25:15.120 [2024-06-10 10:52:43.920301] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:15.120 EAL: No free 2048 kB hugepages reported on node 1
00:25:15.120 [2024-06-10 10:52:43.980003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:15.120 [2024-06-10 10:52:44.056839] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:15.120 [2024-06-10 10:52:44.056871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:15.120 [2024-06-10 10:52:44.056878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:15.120 [2024-06-10 10:52:44.056884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
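nvmfappstart above backgrounds the target, records nvmfpid=4169328, and waitforlisten then polls the RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100) until the application answers. A minimal sketch of that start-and-wait pattern, assuming scripts/rpc.py from the SPDK tree is available; the real waitforlisten also distinguishes a dead process from a slow one, which the kill -0 check stands in for:

    # Launch the target, then poll its RPC socket until it is ready.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the app is listening on the socket
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" 2> /dev/null || exit 1   # target died early
        sleep 0.5
    done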
00:25:15.120 [2024-06-10 10:52:44.056889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.120 [2024-06-10 10:52:44.056931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.120 [2024-06-10 10:52:44.056933] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:25:15.718 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.979 [2024-06-10 10:52:44.767744] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x7812d0/0x780910) succeed. 00:25:15.979 [2024-06-10 10:52:44.776562] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x782580/0x780e90) succeed. 00:25:15.979 [2024-06-10 10:52:44.776584] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # 
malloc_name=cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_0 -a -s SPDK000cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_0 cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.979 10:52:44 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.979 [2024-06-10 10:52:44.906699] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_0 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_1 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.979 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_1 -a -s SPDK000cvl_0_1 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_1 cvl_0_1 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:15.980 [2024-06-10 10:52:44.989665] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_1 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf cvl_0_0 cvl_0_1 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@87 -- # dev_names=('cvl_0_0' 'cvl_0_1') 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:25:15.980 10:52:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=4169517 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 4169517 /var/tmp/bdevperf.sock 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 4169517 ']' 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
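Everything bdevperf is about to exercise was just wired up: create_subsystem_and_connect, traced above for cvl_0_0 and cvl_0_1, issues the same short RPC sequence per netdev, and bdevperf itself is started idle with -z so it can be configured over its own socket. The per-netdev sequence, condensed with an assumed rpc wrapper standing in for the test's rpc_cmd (all names and flags are taken from the trace for cvl_0_0):

    rpc() { scripts/rpc.py "$@"; }    # assumed stand-in for rpc_cmd
    # once, before the loop over netdevs:
    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq
    # then per netdev: one malloc bdev, one subsystem, one namespace, one listener
    rpc bdev_malloc_create 128 512 -b cvl_0_0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_0 -a -s SPDK000cvl_0_0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_0 cvl_0_0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_0 \
        -t rdma -a 192.168.100.8 -s 4420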
00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:15.980 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_0 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address cvl_0_0 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_0 -l -1 -o 1 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.915 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:25:17.173 Nvme_cvl_0_0n1 00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
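On the initiator side the test talks to bdevperf's socket rather than the target's: retries are made infinite and one NVMe-oF controller is attached per subsystem, which yields the Nvme_cvl_0_0n1 bdev seen just above; the loop then repeats for cvl_0_1 below. Condensed from the calls just traced (flags verbatim from the trace; get_ip_address is the same awk/cut one-liner the trace uses throughout):

    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    sock=/var/tmp/bdevperf.sock
    tgt_ip=$(get_ip_address cvl_0_0)                        # -> 192.168.100.8
    scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1   # retry attach forever
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller \
        -b Nvme_cvl_0_0 -t rdma -a "$tgt_ip" -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:system_cvl_0_0 -l -1 -o 1    # creates Nvme_cvl_0_0n1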
00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}"
00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_1
00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1
00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1
00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address cvl_0_1
00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1
00:25:17.173 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1
00:25:17.174 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:17.174 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:17.174 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9
00:25:17.174 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_1 -l -1 -o 1
00:25:17.174 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:17.174 10:52:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:17.174 Nvme_cvl_0_1n1
00:25:17.174 10:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:17.174 10:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=4169625
00:25:17.174 10:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5
00:25:17.174 10:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
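
With both controllers attached, the I/O phase is kicked off over the same socket. The RPC blocks until the run completes, so the script backgrounds it and keeps the pid to wait on after the removal dance; a sketch of the pattern traced at @109-@112:

    # give the verify job time to get going before the removal loop starts
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 120 -s /var/tmp/bdevperf.sock perform_tests &
    bdevperf_rpc_pid=$!
    sleep 5
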
00:25:22.445 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}"
00:25:22.445 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_0
00:25:22.445 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_0
00:25:22.445 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/infiniband
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:22.446 rocep175s0f0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0
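
The helpers traced here map a netdev back to its PCI function and RDMA device, then ask the target whether it still sees that device. A condensed sketch (the sysfs glob stands in for the exact BDF used above, and rpc_cmd is plain rpc.py):

    get_pci_dir() {
        # /sys/bus/pci/devices/<BDF>/net/<dev>/device resolves to the canonical PCI directory
        readlink -f "/sys/bus/pci/devices/"*"/net/$1/device"
    }

    get_rdma_device_name() {
        ls "$(get_pci_dir "$1")/infiniband"   # e.g. rocep175s0f0
    }

    check_rdma_dev_exists_in_nvmf_tgt() {
        /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep "$1"
    }
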
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_0
00:25:22.446 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device
00:25:22.446 [2024-06-10 10:52:51.177289] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device rocep175s0f0 is being removed.
00:25:22.446 [2024-06-10 10:52:51.177745] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:25:22.446 [2024-06-10 10:52:51.179154] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:25:22.446 [2024-06-10 10:52:51.179171] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127
00:25:22.446 [2024-06-10 10:52:51.179177] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1)
00:25:22.446 [2024-06-10 10:52:51.179183 - 10:52:51.180455] rdma.c: 632/634:nvmf_rdma_dump_request: *ERROR*: (per-request dump of the ~127 queued requests elided; each request prints a "Request Data From Pool: 0|1" line followed by a "Request opcode: 1|2" line)
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f0
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:23.016 10:52:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break
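
Hot removal itself is a one-line sysfs write, after which the script polls (up to 10 tries) until the RDMA device drops out of nvmf_get_stats. A sketch using the helpers shown earlier (the sleep between probes is an assumption; the xtrace does not show the loop's pacing):

    remove_one_nic() {
        echo 1 > "$(get_pci_dir "$1")/remove"   # PCI hot-remove via sysfs
    }

    remove_one_nic cvl_0_0
    for i in $(seq 1 10); do
        check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0 || break
        sleep 1
    done
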
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:23.016 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:23.273 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1
00:25:23.273 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci
00:25:23.273 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1
00:25:23.530 [2024-06-10 10:52:52.560095] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x8a0880/0x779970) succeed.
00:25:23.530 [2024-06-10 10:52:52.560167] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen.
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/net
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_0
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z cvl_0_0 ]]
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ cvl_0_0 != \c\v\l\_\0\_\0 ]]
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z cvl_0_0 ]]
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set cvl_0_0 up
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address cvl_0_0
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev cvl_0_0
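
Recovery is the mirror image: rescan the PCI bus, wait for the netdev to re-enumerate under the same PCI function, then restore link state and the address the target listener needs. A sketch (that rescan_pci's "echo 1" targets the standard /sys/bus/pci/rescan node is an assumption; the xtrace only shows the bare echo):

    echo 1 > /sys/bus/pci/rescan
    for i in $(seq 1 10); do
        new_net_dev=$(ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/net 2> /dev/null)
        [[ -n $new_net_dev ]] && break
        sleep 1
    done
    ip link set cvl_0_0 up
    # re-add the IP only if it did not survive the hot-remove
    [[ -z $(get_ip_address cvl_0_0) ]] && ip addr add 192.168.100.8/24 dev cvl_0_0
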
00:25:24.099 10:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
[2024-06-10 10:52:53.007822] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
[2024-06-10 10:52:53.007850] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back
[2024-06-10 10:52:53.007861] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break
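
The pass criterion is simply that the target's RDMA device count grows back past the post-removal count (1 -> 2 here); get_rdma_dev_count_in_nvmf_tgt is the jq length query traced at @83 (loop pacing again assumed):

    get_rdma_dev_count_in_nvmf_tgt() {
        /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices | length'
    }

    # ib_count_after_remove was captured right after the hot-remove above
    for i in $(seq 1 10); do
        ib_count=$(get_rdma_dev_count_in_nvmf_tgt)
        (( ib_count > ib_count_after_remove )) && break
        sleep 1
    done
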
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}"
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/infiniband
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f1
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:24.099 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:24.359 rocep175s0f1
00:25:24.359 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0
00:25:24.359 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_1
00:25:24.359 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=cvl_0_1
00:25:24.359 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1
00:25:24.359 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_1
00:25:24.359 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_1
00:25:24.359 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device
[2024-06-10 10:52:53.146598] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:25:24.359 [2024-06-10 10:52:53.147550] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:25:24.359 [2024-06-10 10:52:53.151337] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device rocep175s0f1 is being removed.
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f1
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci
00:25:24.926 10:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1
00:25:25.493 [2024-06-10 10:52:54.390379] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x779db0/0x9dbc80) succeed.
00:25:25.493 [2024-06-10 10:52:54.390450] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen.
00:25:25.493 [2024-06-10 10:52:54.390472] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: port active
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/net
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_1
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z cvl_0_1 ]]
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ cvl_0_1 != \c\v\l\_\0\_\1 ]]
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z cvl_0_1 ]]
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set cvl_0_1 up
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address cvl_0_1
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev cvl_0_1
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name=
[2024-06-10 10:52:54.842110] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
[2024-06-10 10:52:54.842142] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back
[2024-06-10 10:52:54.842152] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf
00:25:26.061 10:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 4169625
00:26:47.523 0
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 4169517
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 4169517 ']'
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 4169517
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4169517
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:26:47.523 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4169517'
killing process with pid 4169517
10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 4169517
10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 4169517
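
The teardown helper traced from autotest_common.sh checks the victim before killing and then reaps it; roughly, simplified from the xtrace above:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return            # must still be alive
        if [[ $(uname) == Linux ]]; then
            # refuse to kill if the pid now belongs to something like sudo
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
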
00:26:47.787 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt
[2024-06-10 10:52:45.041858] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
[2024-06-10 10:52:45.041904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169517 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-06-10 10:52:45.095872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-10 10:52:45.172807] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
[2024-06-10 10:52:51.177044] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
[2024-06-10 10:52:51.177079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-06-10 10:52:51.177088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32615 cdw0:6 sqhd:d3b9 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeats for cid 2 through 4 ...]
[2024-06-10 10:52:51.179794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-06-10 10:52:51.179815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state.
00:26:47.787 [2024-06-10 10:52:51.179848] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:26:47.787 [2024-06-10 10:52:51.182849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:209416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x2cdd800e
00:26:47.787 [2024-06-10 10:52:51.182869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0
[... the same READ command/completion pair repeats for every queued read, lba 209424 through 209912 in steps of 8 (62 further commands), cid and buffer address varying, all completed ABORTED - SQ DELETION ...]
00:26:47.789 [2024-06-10 10:52:51.183824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:209920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:47.789 [2024-06-10 10:52:51.183830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0
[... likewise for every queued write, lba 209928 through 210424 in steps of 8 (63 further commands), all completed ABORTED - SQ DELETION ...]
00:26:47.790 [2024-06-10 10:52:51.211529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:47.790 [2024-06-10 10:52:51.211541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:47.790 [2024-06-10 10:52:51.211548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:210432 len:8 PRP1 0x0 PRP2 0x0
00:26:47.790 [2024-06-10 10:52:51.211555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:47.790 [2024-06-10 10:52:51.213170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller
00:26:47.791 [2024-06-10 10:52:51.213473] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:26:47.791 [2024-06-10 10:52:51.213487] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:26:47.791 [2024-06-10 10:52:51.213493] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:26:47.791 [2024-06-10 10:52:51.213506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:47.791 [2024-06-10 10:52:51.213514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state.
00:26:47.791 [2024-06-10 10:52:51.213524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state
00:26:47.791 [2024-06-10 10:52:51.213530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed
00:26:47.791 [2024-06-10 10:52:51.213537] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state
00:26:47.791 [2024-06-10 10:52:51.213552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.791 [2024-06-10 10:52:51.213562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller
00:26:47.791 [2024-06-10 10:52:53.146294] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:26:47.791 [2024-06-10 10:52:53.146325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:47.791 [2024-06-10 10:52:53.146333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32615 cdw0:6 sqhd:d3b9 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeats for cid 2 through 4 ...]
00:26:47.791 [2024-06-10 10:52:53.146707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:47.791 [2024-06-10 10:52:53.146721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state.
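Everything from "Running I/O for 90 seconds..." to this point is the expected drain-and-reconnect pattern: every queued command is completed ABORTED - SQ DELETION, and the reconnect loops on RDMA_CM_EVENT_ADDR_ERROR until the address comes back. When scanning a capture like try.txt, the totals are usually more useful than the per-command lines; one way to get them (a sketch against the try.txt path shown in the trace above):

    # Tally aborted commands per opcode (READ/WRITE) in the bdevperf capture;
    # lba and cid vary per line, so reduce to the opcode before counting.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' try.txt |
        awk '{print $NF}' | sort | uniq -c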
00:26:47.791 [2024-06-10 10:52:53.146740] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:26:47.791 [2024-06-10 10:52:53.148427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:224472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c8000 len:0x1000 key:0x55764a75
00:26:47.791 [2024-06-10 10:52:53.148442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0
[... the same READ command/completion pair repeats on the second controller (key 0x55764a75), lba 224480 through 224672 in steps of 8 (25 further commands), cid and buffer address varying, all completed ABORTED - SQ DELETION ...]
00:26:47.792 [2024-06-10 10:52:53.148840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:224680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007994000 len:0x1000 key:0x55764a75
00:26:47.792 [2024-06-10 10:52:53.148847]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:224688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007992000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:224696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007990000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:224704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798e000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:224712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798c000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:224720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798a000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:224728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:224736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:224744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:224752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.148988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.148996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:224760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:224768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:224776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:224784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:224792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:224800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:224808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:224824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149127] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:224840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:224856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:224864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:224872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:224888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:224896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149260] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:224904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x55764a75 00:26:47.792 [2024-06-10 10:52:53.149275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.792 [2024-06-10 10:52:53.149283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:224920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:224928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:224936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:224944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:224952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:224960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:224968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149392] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:224976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:224992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:225000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:225008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:225016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:225024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:225032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:225040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:225048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:225056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:225064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:225072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:225080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:225088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:225096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:225104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:225112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149653] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:225120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:225128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:225136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:225144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:225152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:225160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:225168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:225176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:225184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149789] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:225192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.793 [2024-06-10 10:52:53.149811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:225200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x55764a75 00:26:47.793 [2024-06-10 10:52:53.149818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:225208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:225216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:225224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:225232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:225240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:225248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:225256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149920] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:225264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:225272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x55764a75 00:26:47.794 [2024-06-10 10:52:53.149948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:225280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.149968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:225288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.149984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.149992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:225296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.149998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:225304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:225312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:225320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:225328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:225336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:225344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:225352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:225360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:225368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:225376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:225384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:225392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:225400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:225408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 
[2024-06-10 10:52:53.150207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:225416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:225424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:225432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:225440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:225448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:225456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:225464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:225472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.150323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:225480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.794 [2024-06-10 10:52:53.150329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32615 cdw0:ceb524f0 sqhd:0540 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.164369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.794 [2024-06-10 10:52:53.164381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.794 [2024-06-10 10:52:53.164387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:225488 len:8 PRP1 0x0 PRP2 0x0 00:26:47.794 [2024-06-10 10:52:53.164394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.794 [2024-06-10 10:52:53.164434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller 00:26:47.794 [2024-06-10 10:52:53.164714] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:26:47.794 [2024-06-10 10:52:53.164725] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:26:47.795 [2024-06-10 10:52:53.164731] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:26:47.795 [2024-06-10 10:52:53.164744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.795 [2024-06-10 10:52:53.164754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state. 00:26:47.795 [2024-06-10 10:52:53.164766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state 00:26:47.795 [2024-06-10 10:52:53.164771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed 00:26:47.795 [2024-06-10 10:52:53.164778] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state 00:26:47.795 [2024-06-10 10:52:53.164794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.795 [2024-06-10 10:52:53.164802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller 00:26:47.795 [2024-06-10 10:52:53.219376] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:26:47.795 [2024-06-10 10:52:53.219388] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:47.795 [2024-06-10 10:52:53.219401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.795 [2024-06-10 10:52:53.219408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state. 00:26:47.795 [2024-06-10 10:52:53.219417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state 00:26:47.795 [2024-06-10 10:52:53.219423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed 00:26:47.795 [2024-06-10 10:52:53.219429] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state 00:26:47.795 [2024-06-10 10:52:53.219443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.795 [2024-06-10 10:52:53.219449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller 00:26:47.795 [2024-06-10 10:52:54.260217] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
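The "(00/08)" pair in the abort notices above is the NVMe completion status printed as status-code-type/status-code: SCT 0x0 is the generic command status set, and generic status 0x08 means the command was aborted because its submission queue was deleted, which is exactly what happens when a qpair is torn down during device removal. Below is a minimal, self-contained C sketch of that decode; the struct layout mirrors the 16-bit NVMe status field (phase, SC, SCT, CRD, more, DNR), but the names and helper are illustrative, not SPDK's actual implementation.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative layout of the 16-bit NVMe completion status field.
     * The log prints it as "STATUS (sct/sc) ... p:.. m:.. dnr:..". */
    struct nvme_status {
        uint16_t p   : 1; /* phase tag */
        uint16_t sc  : 8; /* status code */
        uint16_t sct : 3; /* status code type */
        uint16_t crd : 2; /* command retry delay */
        uint16_t m   : 1; /* more information available */
        uint16_t dnr : 1; /* do not retry */
    };

    static const char *status_string(struct nvme_status s)
    {
        /* SCT 0x0 = generic command status; SC 0x08 in that set is
         * "Command Aborted due to SQ Deletion" per the NVMe spec. */
        if (s.sct == 0x0 && s.sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "UNKNOWN";
    }

    int main(void)
    {
        struct nvme_status s = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };
        /* Prints: ABORTED - SQ DELETION (00/08) p:0 m:0 dnr:0 */
        printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
               status_string(s), s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Note that dnr:0 on every aborted command means the initiator is allowed to retry, which is why the bdev layer queues the I/O and attempts the controller resets seen next.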
00:26:47.795 [2024-06-10 10:52:55.169827] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:26:47.795 [2024-06-10 10:52:55.169852] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:26:47.795 [2024-06-10 10:52:55.169875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:47.795 [2024-06-10 10:52:55.169882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state.
00:26:47.795 [2024-06-10 10:52:55.169892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state
00:26:47.795 [2024-06-10 10:52:55.169899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed
00:26:47.795 [2024-06-10 10:52:55.169905] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state
00:26:47.795 [2024-06-10 10:52:55.169922] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:47.795 [2024-06-10 10:52:55.169929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller
00:26:47.795 [2024-06-10 10:52:56.228546] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:47.795
00:26:47.795 Latency(us)
00:26:47.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.795 Job: Nvme_cvl_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:47.795 Verification LBA range: start 0x0 length 0x8000
00:26:47.795 Nvme_cvl_0_0n1 : 90.01 11286.33 44.09 0.00 0.00 11321.22 1178.09 4058488.44
00:26:47.795 Job: Nvme_cvl_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:47.795 Verification LBA range: start 0x0 length 0x8000
00:26:47.795 Nvme_cvl_0_1n1 : 90.01 11231.38 43.87 0.00 0.00 11376.59 2402.99 4042510.14
00:26:47.795 ===================================================================================================================
00:26:47.795 Total : 22517.71 87.96 0.00 0.00 11348.84 1178.09 4058488.44
00:26:47.795 Received shutdown signal, test time was about 90.000000 seconds
00:26:47.795
00:26:47.795 Latency(us)
00:26:47.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:47.795 ===================================================================================================================
00:26:47.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt
00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 4169328
00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 4169328 ']'
00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 4169328
00:26:47.795 10:54:16
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname 00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4169328 00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4169328' 00:26:47.795 killing process with pid 4169328 00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 4169328 00:26:47.795 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 4169328 00:26:48.055 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid= 00:26:48.055 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0 00:26:48.055 00:26:48.055 real 1m33.123s 00:26:48.055 user 4m37.175s 00:26:48.055 sys 0m1.734s 00:26:48.055 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:48.055 10:54:16 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:48.055 ************************************ 00:26:48.055 END TEST nvmf_device_removal_pci_remove_no_srq 00:26:48.055 ************************************ 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:26:48.055 ************************************ 00:26:48.055 START TEST nvmf_device_removal_pci_remove 00:26:48.055 ************************************ 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1124 -- # test_remove_and_rescan 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=4184923 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
nvmf/common.sh@482 -- # waitforlisten 4184923 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 4184923 ']' 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:48.055 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:48.314 [2024-06-10 10:54:17.112623] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:26:48.314 [2024-06-10 10:54:17.112665] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.314 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.314 [2024-06-10 10:54:17.173388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:48.314 [2024-06-10 10:54:17.243825] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.314 [2024-06-10 10:54:17.243868] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.314 [2024-06-10 10:54:17.243875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.314 [2024-06-10 10:54:17.243881] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.314 [2024-06-10 10:54:17.243886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
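The -m 0x3 argument passed to nvmf_tgt above is a CPU core bitmask: bits 0 and 1 are set, so the app reports two available cores and starts one reactor on each, matching the two "Reactor started on core" notices that follow. A minimal sketch of how such a mask expands into core numbers is below; it assumes nothing about SPDK's real option parsing and the names are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: expand a core mask such as the "-m 0x3" passed to
     * nvmf_tgt into individual core numbers. 0x3 = 0b11 selects cores 0
     * and 1, matching the two reactors reported in the log. */
    int main(void)
    {
        uint64_t core_mask = 0x3;
        unsigned total = 0;

        for (unsigned core = 0; core < 64; core++) {
            if (core_mask & (1ULL << core)) {
                printf("reactor on core %u\n", core);
                total++;
            }
        }
        printf("Total cores available: %u\n", total);
        return 0;
    }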
00:26:48.314 [2024-06-10 10:54:17.243936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.314 [2024-06-10 10:54:17.243939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.882 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:48.882 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:26:48.882 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:48.882 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:48.882 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.141 [2024-06-10 10:54:17.954245] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x11d52d0/0x11d4910) succeed. 00:26:49.141 [2024-06-10 10:54:17.962761] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x11d6580/0x11d4e90) succeed. 00:26:49.141 [2024-06-10 10:54:17.962783] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo cvl_0_0 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo cvl_0_1 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_0 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:26:49.141 10:54:17 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_0 00:26:49.141 
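The create_subsystem_and_connect_on_netdev flow traced below boils down to one transport-wide RPC plus four per-netdev RPCs; a sketch for cvl_0_0, with every value taken from the trace (invoking rpc.py directly is a simplification of the test's rpc_cmd wrapper):

  dev=cvl_0_0
  nqn=nqn.2016-06.io.spdk:system_$dev
  ip=$(ip -o -4 addr show $dev | awk '{print $4}' | cut -d/ -f1)    # 192.168.100.8
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192  # once, already done above
  $SPDK/scripts/rpc.py bdev_malloc_create 128 512 -b $dev           # 128 MiB backing bdev, 512 B blocks
  $SPDK/scripts/rpc.py nvmf_create_subsystem $nqn -a -s SPDK000$dev # -a: allow any host
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns $nqn $dev
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $nqn -t rdma -a $ip -s 4420

The same per-device sequence runs again for cvl_0_1 at 192.168.100.9 further down.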
10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_0 -a -s SPDK000cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_0 cvl_0_0 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.141 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 00:26:49.142 10:54:18 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.142 [2024-06-10 10:54:18.079636] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_0 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.142 10:54:18 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_1 -a -s SPDK000cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_1 cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:49.142 [2024-06-10 10:54:18.158223] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf cvl_0_0 cvl_0_1 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('cvl_0_0' 'cvl_0_1') 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:26:49.142 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@91 -- # bdevperf_pid=4185163 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; 
cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 4185163 /var/tmp/bdevperf.sock 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 4185163 ']' 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:49.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:49.401 10:54:18 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:50.031 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:50.031 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:26:50.031 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:50.031 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.031 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:50.031 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_0 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address cvl_0_0 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print 
$4}' 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_0 -l -1 -o 1 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.032 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:50.290 Nvme_cvl_0_0n1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address cvl_0_1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_1 -l -1 -o 1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:50.290 Nvme_cvl_0_1n1 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=4185401 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:26:50.290 10:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/infiniband 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r 
'.poll_groups[0].transports[].devices[].name' 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.559 rocep175s0f0 00:26:55.559 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:26:55.560 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_0 00:26:55.560 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=cvl_0_0 00:26:55.560 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:26:55.560 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_0 00:26:55.560 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:26:55.560 10:54:24 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:26:55.560 [2024-06-10 10:54:24.330010] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:55.560 [2024-06-10 10:54:24.331060] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:55.560 [2024-06-10 10:54:24.334675] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device rocep175s0f0 is being removed. 
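remove_one_nic, whose effects appear above, is a plain sysfs PCI hot-remove: resolve the netdev back to its PCI function, then detach that function from the bus. A sketch under the paths echoed in the trace (the redirect target itself is not echoed; .../remove is the standard Linux PCI sysfs hook):

  dev_name=cvl_0_0
  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:af:00.0/net/$dev_name/device)
  # -> /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0, as shown above
  echo 1 > $pci_dir/remove   # kicks off the device-removal handling logged above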
00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f0 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:56.126 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:56.384 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:26:56.384 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.384 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:56.384 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.384 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:26:56.384 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:26:56.384 10:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:26:56.950 [2024-06-10 10:54:25.723932] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x12f4880/0x11cd970) succeed. 00:26:56.950 [2024-06-10 10:54:25.724060] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
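rescan_pci and the recovery loop traced below re-enumerate the removed function and re-plumb its netdev; a sketch (the rescan target is likewise not echoed in the trace; /sys/bus/pci/rescan is the standard knob, and the script bounds its waits with seq 1 10 loops rather than the open-ended loop shown here):

  echo 1 > /sys/bus/pci/rescan                    # re-enumerate; irdma0 / cvl_0_0 reappear
  until new_net_dev=$(ls $pci_dir/net 2>/dev/null) && [ -n "$new_net_dev" ]; do sleep 1; done
  ip link set $new_net_dev up
  ip addr add 192.168.100.8/24 dev $new_net_dev   # after this the listener "comes back" below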
00:26:56.950 [2024-06-10 10:54:25.724079] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: port active 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/net 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_0 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z cvl_0_0 ]] 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ cvl_0_0 != \c\v\l\_\0\_\0 ]] 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z cvl_0_0 ]] 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set cvl_0_0 up 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address cvl_0_0 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev cvl_0_0 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:26:57.209 [2024-06-10 10:54:26.189291] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:57.209 [2024-06-10 10:54:26.189322] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:26:57.209 [2024-06-10 10:54:26.189334] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # 
jq -r '.poll_groups[0].transports[].devices | length' 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_1 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_1 00:26:57.209 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/infiniband 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:26:57.469 10:54:26 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.469 rocep175s0f1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:26:57.469 10:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:26:57.469 [2024-06-10 10:54:26.333177] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device rocep175s0f1 is being removed. 
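Both target-side probes in this pass, check_rdma_dev_exists_in_nvmf_tgt (just above, matching rocep175s0f1) and get_rdma_dev_count_in_nvmf_tgt, are thin jq wrappers over a single RPC; the filters are verbatim from the trace, with direct rpc.py invocation again standing in for rpc_cmd:

  # is a given RDMA device still visible to the target's poll group?
  $SPDK/scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices[].name' | grep rocep175s0f1
  # how many RDMA devices in total? (compared against ib_count_after_remove)
  $SPDK/scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'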
00:26:57.469 [2024-06-10 10:54:26.333407] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:26:57.469 [2024-06-10 10:54:26.333722] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:26:57.469 [2024-06-10 10:54:26.333746] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127
00:26:57.469 [2024-06-10 10:54:26.333751] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1)
[repetitive per-request dump condensed: rdma.c:632/634 nvmf_rdma_dump_request prints a "Request Data From Pool" / "Request opcode" pair for each outstanding request, almost all "Pool: 0" / "opcode: 1" with occasional "Pool: 1" / "opcode: 2" entries; the capture breaks off mid-dump at [2024-06-10 10:54:26.334968] rdma.c:]
632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.334973] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.334979] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.334986] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.334992] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.334997] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335002] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335007] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335013] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335019] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335024] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335031] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335037] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335041] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335047] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335053] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335058] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335063] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335070] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335075] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335082] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335087] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335093] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335098] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.471 [2024-06-10 10:54:26.335103] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.471 [2024-06-10 10:54:26.335109] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.472 [2024-06-10 10:54:26.335116] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.472 [2024-06-10 10:54:26.335121] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:57.472 [2024-06-10 10:54:26.335127] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:57.472 [2024-06-10 10:54:26.335132] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:57.472 [2024-06-10 10:54:26.335138] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:57.472 [2024-06-10 10:54:26.335143] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:58.038 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 
00:26:58.038 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10
00:26:58.038 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:26:58.038 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1
00:26:58.038 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1
00:26:58.039 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:26:58.039 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:58.039 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:26:58.039 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:26:58.039 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f1
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci
00:26:58.297 10:54:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1
00:26:58.865 [2024-06-10 10:54:27.644399] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x11cddb0/0x142fc80) succeed.
00:26:58.865 [2024-06-10 10:54:27.644464] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen.
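The loop traced above polls the target over JSON-RPC until the removed device (rocep175s0f1) stops showing up in nvmf_get_stats, records the surviving device count, and then rescans the PCI bus. A minimal bash sketch of those helpers, assuming SPDK's scripts/rpc.py can reach this target's RPC socket and that the truncated "echo 1" in rescan_pci is aimed at the kernel's PCI rescan node (both assumptions; neither path is shown in the log):

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

    check_rdma_dev_exists_in_nvmf_tgt() {
        local rdma_dev_name=$1
        # Succeeds while the device is still attached to the target's poll group.
        "$rpc" nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep -q "$rdma_dev_name"
    }

    get_rdma_dev_count_in_nvmf_tgt() {
        # How many RDMA devices the transport still polls (1 after the removal).
        "$rpc" nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'
    }

    rescan_pci() {
        echo 1 > /sys/bus/pci/rescan    # assumed destination of the "echo 1" traced above
    }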
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/net
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_1
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z cvl_0_1 ]]
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ cvl_0_1 != \c\v\l\_\0\_\1 ]]
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z cvl_0_1 ]]
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set cvl_0_1 up
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address cvl_0_1
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}'
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev cvl_0_1
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
[2024-06-10 10:54:28.106101] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
[2024-06-10 10:54:28.106132] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back
[2024-06-10 10:54:28.106152] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
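After the rescan the port function reappears and the kernel may hand its netdev a fresh name, so the trace above re-reads the name from the function's net/ directory, brings the link up, and re-adds the test address only if none survived. The same recovery steps as a standalone bash sketch (device path and address taken from the trace; the test runs privileged):

    new_net_dev=$(ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/net)   # cvl_0_1 in this run
    [[ -z $new_net_dev ]] && exit 1
    ip link set "$new_net_dev" up
    # get_ip_address: first IPv4 address on the interface, prefix length stripped.
    addr=$(ip -o -4 addr show "$new_net_dev" | awk '{print $4}' | cut -d/ -f1)
    # Empty means the address did not survive the remove/rescan cycle, so put it back.
    [[ -z $addr ]] && ip addr add 192.168.100.9/24 dev "$new_net_dev"

Once the address is back, the target retries its listener and the port returns, which is exactly what the rdma.c notices above report.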
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf
00:26:59.125 10:54:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 4185401
00:28:20.566 0
00:28:20.566 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # killprocess 4185163
00:28:20.566 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 4185163 ']'
00:28:20.566 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 4185163
00:28:20.566 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname
00:28:20.566 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:20.566 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4185163
00:28:20.831 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:28:20.831 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:28:20.831 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4185163'
killing process with pid 4185163
00:28:20.831 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 4185163
00:28:20.831 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 4185163
00:28:20.831 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid=
00:28:20.831 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt
[2024-06-10 10:54:18.213035] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
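stop_bdevperf first waits on a background job (pid 4185401), then tears down the bdevperf app itself, pid 4185163 (matching the spdk_pid4185163 file prefix in the EAL line below), via killprocess from autotest_common.sh. Reduced to the branch this run actually took, the helper is roughly:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # nothing to kill
        kill -0 "$pid" || return 1                # bail if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 in this run
        fi
        # The real helper special-cases a sudo wrapper; simplified to a bail-out here.
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child and pick up its exit status
    }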
00:28:20.831 [2024-06-10 10:54:18.213081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4185163 ]
00:28:20.831 EAL: No free 2048 kB hugepages reported on node 1
00:28:20.831 [2024-06-10 10:54:18.267471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:20.831 [2024-06-10 10:54:18.344731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:28:20.831 Running I/O for 90 seconds...
00:28:20.831 [2024-06-10 10:54:24.331072] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:28:20.831 [2024-06-10 10:54:24.332014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:179048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:20.831 [2024-06-10 10:54:24.332035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0
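Everything from "Starting SPDK" onward is bdevperf's own log, replayed from try.txt. The log pins down only fragments of the invocation (core mask 0x4 via the EAL "-c 0x4", and the 90-second run); a hypothetical launch consistent with it, with the remaining flags assumed:

    spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    # -m 0x4 and -t 90 follow from the log above; -z, -r, -q, -o and -w are assumptions.
    "$spdk"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &
    bdevperf_pid=$!

The dump that follows lists every I/O that was still in flight on qid:1 when the hot-removed queue pair was torn down, each completed with ABORTED - SQ DELETION.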
[the dump continues in this pattern for every outstanding I/O on qid:1: WRITEs lba 179048 through 179192 (len:8, in-capsule SGL DATA BLOCK OFFSET 0x0), then READs from lba 178176 upward (len:8, SGL KEYED DATA BLOCK ADDRESS 0x200007700000 through 0x2000077c0000, key:0x7a35dfe5), each command followed by an ABORTED - SQ DELETION (00/08) completion with cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0]
00:28:20.834 [2024-06-10 10:54:24.333835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:178952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x7a35dfe5
00:28:20.834 [2024-06-10 10:54:24.333841] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.834 [2024-06-10 10:54:24.333850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:178960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x7a35dfe5 00:28:20.834 [2024-06-10 10:54:24.333856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.834 [2024-06-10 10:54:24.333865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:178968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x7a35dfe5 00:28:20.834 [2024-06-10 10:54:24.333872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.834 [2024-06-10 10:54:24.333880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:178976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x7a35dfe5 00:28:20.834 [2024-06-10 10:54:24.333887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.333895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:178984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x7a35dfe5 00:28:20.835 [2024-06-10 10:54:24.333902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.333910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:178992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x7a35dfe5 00:28:20.835 [2024-06-10 10:54:24.333917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.333926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:179000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x7a35dfe5 00:28:20.835 [2024-06-10 10:54:24.333933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.341735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:179008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x7a35dfe5 00:28:20.835 [2024-06-10 10:54:24.341745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.341754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:179016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x7a35dfe5 00:28:20.835 [2024-06-10 10:54:24.341762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.341771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:179024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x7a35dfe5 00:28:20.835 [2024-06-10 10:54:24.341778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.341787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:179032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d6000 len:0x1000 key:0x7a35dfe5 00:28:20.835 [2024-06-10 10:54:24.341793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.341886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:20.835 [2024-06-10 10:54:24.341894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:20.835 [2024-06-10 10:54:24.341901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:179040 len:8 PRP1 0x0 PRP2 0x0 00:28:20.835 [2024-06-10 10:54:24.341908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.341943] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192d7640 was disconnected and freed. reset controller. 00:28:20.835 [2024-06-10 10:54:24.343495] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:20.835 [2024-06-10 10:54:24.343514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:24.343522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:16 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.343530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:24.343536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:16 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.343544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:24.343551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:16 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.343558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:24.343564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:16 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:24.356278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:20.835 [2024-06-10 10:54:24.356293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state. 00:28:20.835 [2024-06-10 10:54:24.356301] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:20.835 [2024-06-10 10:54:24.356493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller 00:28:20.835 [2024-06-10 10:54:24.356608] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:28:20.835 [2024-06-10 10:54:24.356619] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:20.835 [2024-06-10 10:54:24.356625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:20.835 [2024-06-10 10:54:24.356639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:20.835 [2024-06-10 10:54:24.356647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state. 00:28:20.835 [2024-06-10 10:54:24.356657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state 00:28:20.835 [2024-06-10 10:54:24.356663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed 00:28:20.835 [2024-06-10 10:54:24.356670] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state 00:28:20.835 [2024-06-10 10:54:24.356685] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.835 [2024-06-10 10:54:24.356693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller 00:28:20.835 [2024-06-10 10:54:26.332417] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:28:20.835 [2024-06-10 10:54:26.332449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:26.332459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:6 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.332467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:26.332475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:6 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.332482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:26.332489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:6 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.332496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.835 [2024-06-10 10:54:26.332502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32652 cdw0:6 sqhd:73b9 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.333498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:20.835 [2024-06-10 10:54:26.333513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:system_cvl_0_1] in failed state. 00:28:20.835 [2024-06-10 10:54:26.333609] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:28:20.835 [2024-06-10 10:54:26.334362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:195536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f4000 len:0x1000 key:0xe2565403 00:28:20.835 [2024-06-10 10:54:26.334373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.334392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:195544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f6000 len:0x1000 key:0xe2565403 00:28:20.835 [2024-06-10 10:54:26.334401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.334410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:195552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f8000 len:0x1000 key:0xe2565403 00:28:20.835 [2024-06-10 10:54:26.334418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.334427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:195560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fa000 len:0x1000 key:0xe2565403 00:28:20.835 [2024-06-10 10:54:26.334435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.334444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:195568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fc000 len:0x1000 key:0xe2565403 00:28:20.835 [2024-06-10 10:54:26.334452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.334461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:195576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0xe2565403 00:28:20.835 [2024-06-10 10:54:26.334469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.334478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:195584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.835 [2024-06-10 10:54:26.334485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.835 [2024-06-10 10:54:26.334494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:195592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.835 [2024-06-10 10:54:26.334501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:195600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:195608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:195616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:195624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:195632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:195640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:195648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:195656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:195664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:195672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:195680 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:195688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:195696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:195704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:195712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:195720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:195728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:195736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:195744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:195752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:44 nsid:1 lba:195760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:195768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:195776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:195784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:195792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:195800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:195808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:195816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.334990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.334998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:195824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.335015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:195832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 
10:54:26.335038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:195840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.335055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:195848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.335075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:195856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.335092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:195864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.335108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:195872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.335126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:195880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.836 [2024-06-10 10:54:26.335141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:195888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.836 [2024-06-10 10:54:26.335149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:195896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:195904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:195912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:195920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:195928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:195936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:195944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:195952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:195960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:195968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:195976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:195984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:195992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:196000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:196008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:196016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:196024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:196032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:196040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:196048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:196056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:196064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:196072 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:196080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:196088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:196096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:196104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:196112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:196120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:196128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:196136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:196144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335640] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:196152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:196160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:196168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:196176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:196184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:196192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.837 [2024-06-10 10:54:26.335726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:196200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.837 [2024-06-10 10:54:26.335734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:196208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:196216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:196224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 
[2024-06-10 10:54:26.335789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:196232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:196240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:196248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:196256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:196264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:196272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:196280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:196288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:196296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:196304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:196312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:196320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:196328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:196336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.335990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.335998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:196344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:196352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:196360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:196368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:196376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:196384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:196392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:196400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:196408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:196416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:196424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:196432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:196440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:196448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:196456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:196464 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:196472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:196480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:196488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:196496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:196504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.838 [2024-06-10 10:54:26.336305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:196512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.838 [2024-06-10 10:54:26.336314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.839 [2024-06-10 10:54:26.336322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:196520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.839 [2024-06-10 10:54:26.336329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.839 [2024-06-10 10:54:26.336337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:196528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.839 [2024-06-10 10:54:26.336343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.839 [2024-06-10 10:54:26.336352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:196536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.839 [2024-06-10 10:54:26.336359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.839 [2024-06-10 10:54:26.336367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:123 nsid:1 lba:196544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.839 [2024-06-10 10:54:26.336373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32652 cdw0:772af860 sqhd:a540 p:0 m:0 dnr:0 00:28:20.839 [2024-06-10 10:54:26.349082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:20.839 [2024-06-10 10:54:26.349095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:20.839 [2024-06-10 10:54:26.349101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:196552 len:8 PRP1 0x0 PRP2 0x0 00:28:20.839 [2024-06-10 10:54:26.349109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.839 [2024-06-10 10:54:26.349150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller 00:28:20.839 [2024-06-10 10:54:26.349441] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:28:20.839 [2024-06-10 10:54:26.349452] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:20.839 [2024-06-10 10:54:26.349459] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:28:20.839 [2024-06-10 10:54:26.349473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:20.839 [2024-06-10 10:54:26.349480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state. 00:28:20.839 [2024-06-10 10:54:26.349494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state 00:28:20.839 [2024-06-10 10:54:26.349501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed 00:28:20.839 [2024-06-10 10:54:26.349509] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state 00:28:20.839 [2024-06-10 10:54:26.349527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.839 [2024-06-10 10:54:26.349534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller 00:28:20.839 [2024-06-10 10:54:26.363298] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:20.839 [2024-06-10 10:54:26.363310] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:20.839 [2024-06-10 10:54:26.363324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:20.839 [2024-06-10 10:54:26.363331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state. 
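
Every WRITE still queued on qpair 1 is completed manually with ABORTED - SQ DELETION while the qpair is torn down, one command/completion notice pair per outstanding request (LBAs 196312 through 196552 above). When triaging a run like this offline, the aborted completions can be tallied straight from the console log; a minimal awk sketch, assuming the log was saved to console.log (the file name is an assumption; the message format is exactly as printed above):

    awk '{
        # count every aborted completion, even when several entries share one line
        aborts += gsub(/ABORTED - SQ DELETION/, "&")
        s = $0
        while (match(s, /lba:[0-9]+/)) {          # walk all lba:<n> tokens on the line
            lba = substr(s, RSTART + 4, RLENGTH - 4) + 0
            if (!seen++ || lba < min) min = lba
            if (lba > max) max = lba
            s = substr(s, RSTART + RLENGTH)
        }
    } END { printf "aborted completions: %d, lba range: %d..%d\n", aborts, min, max }' console.log
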
00:28:20.839 [2024-06-10 10:54:26.363342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state 00:28:20.839 [2024-06-10 10:54:26.363349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed 00:28:20.839 [2024-06-10 10:54:26.363355] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state 00:28:20.839 [2024-06-10 10:54:26.363370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.839 [2024-06-10 10:54:26.363377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller 00:28:20.839 [2024-06-10 10:54:27.414509] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:20.839 [2024-06-10 10:54:28.354547] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:20.839 [2024-06-10 10:54:28.354573] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:28:20.839 [2024-06-10 10:54:28.354594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:20.839 [2024-06-10 10:54:28.354602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state. 00:28:20.839 [2024-06-10 10:54:28.354613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state 00:28:20.839 [2024-06-10 10:54:28.354619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed 00:28:20.839 [2024-06-10 10:54:28.354627] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state 00:28:20.839 [2024-06-10 10:54:28.354644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.839 [2024-06-10 10:54:28.354652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller 00:28:20.839 [2024-06-10 10:54:29.412820] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
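
The recovery sequence above repeats until RDMA address resolution succeeds again: each cycle is disconnect, RDMA_CM_EVENT_ADDR_ERROR, controller marked failed, reinitialization failed, retry; at 10:54:27 and 10:54:29 both controllers finally report "Resetting controller successful." The same retry-until-the-address-resolves idea can be expressed as a host-side nvme-cli loop; this is an analogue for illustration, not what bdev_nvme does internally (the subsystem NQN is taken from the log; the address, port, loop bound, and back-off are assumptions):

    for attempt in {1..30}; do
        if nvme connect -t rdma -n nqn.2016-06.io.spdk:system_cvl_0_1 \
               -a 192.168.100.8 -s 4420; then
            echo "reconnected on attempt $attempt"
            break
        fi
        sleep 1    # give the removed/rescanned device time to come back
    done
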
00:28:20.839
00:28:20.839 Latency(us)
00:28:20.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.839 Job: Nvme_cvl_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:20.839 Verification LBA range: start 0x0 length 0x8000
00:28:20.839 Nvme_cvl_0_0n1 : 90.01 10805.99 42.21 0.00 0.00 11822.17 2153.33 4042510.14
00:28:20.839 Job: Nvme_cvl_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:20.839 Verification LBA range: start 0x0 length 0x8000
00:28:20.839 Nvme_cvl_0_1n1 : 90.01 10773.99 42.09 0.00 0.00 11858.36 2356.18 4042510.14
00:28:20.839 ===================================================================================================================
00:28:20.839 Total : 21579.98 84.30 0.00 0.00 11840.24 2153.33 4042510.14
00:28:20.839 Received shutdown signal, test time was about 90.000000 seconds
00:28:20.839
00:28:20.839 Latency(us)
00:28:20.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.839 ===================================================================================================================
00:28:20.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 4184923
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 4184923 ']'
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 4184923
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4184923
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4184923'
00:28:20.839 killing process with pid 4184923
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 4184923
00:28:20.839 10:55:49 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 4184923
00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid=
00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0
00:28:21.408
00:28:21.408 real 1m33.079s
00:28:21.408 user 4m37.073s
00:28:21.408 sys 0m1.679s
00:28:21.408 10:55:50
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:21.408 ************************************ 00:28:21.408 END TEST nvmf_device_removal_pci_remove 00:28:21.408 ************************************ 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:21.408 rmmod nvme_rdma 00:28:21.408 rmmod nvme_fabrics 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf 00:28:21.408 00:28:21.408 real 3m12.560s 00:28:21.408 user 9m16.177s 00:28:21.408 sys 0m8.003s 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:21.408 10:55:50 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:28:21.408 ************************************ 00:28:21.408 END TEST nvmf_device_removal 00:28:21.408 ************************************ 00:28:21.408 10:55:50 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:28:21.408 10:55:50 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:21.409 10:55:50 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:21.409 10:55:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:21.409 ************************************ 00:28:21.409 START TEST nvmf_srq_overwhelm 00:28:21.409 ************************************ 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:28:21.409 * Looking for test storage... 
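
At this boundary the harness closes nvmf_device_removal (the END TEST banners plus the real/user/sys times printed above) and opens nvmf_srq_overwhelm through the same run_test wrapper. A rough reconstruction of that wrapper, inferred only from the banners and timing output visible here; the real helper lives in common/autotest_common.sh and also manages xtrace, so treat this as a sketch, not the actual source:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"            # produces the real/user/sys summary seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
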
00:28:21.409 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:28:21.409 10:55:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:26.683 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:26.683 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # modinfo irdma 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:26.683 Found net devices under 0000:af:00.0: cvl_0_0 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:26.683 Found net devices under 0000:af:00.1: cvl_0_1 00:28:26.683 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:26.684 10:55:55 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:28:26.684 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:26.684 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:28:26.684 altname enp175s0f0np0 00:28:26.684 altname ens801f0np0 00:28:26.684 inet 192.168.100.8/24 scope global cvl_0_0 00:28:26.684 valid_lft forever preferred_lft forever 00:28:26.684 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:28:26.684 valid_lft forever preferred_lft forever 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:26.684 10:55:55 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:28:26.684 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:26.684 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:28:26.684 altname enp175s0f1np1 00:28:26.684 altname ens801f1np1 00:28:26.684 inet 192.168.100.9/24 scope global cvl_0_1 00:28:26.684 valid_lft forever preferred_lft forever 00:28:26.684 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:28:26.684 valid_lft forever preferred_lft forever 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:26.684 
10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:26.684 192.168.100.9' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:26.684 192.168.100.9' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:26.684 192.168.100.9' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=10046 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 10046 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@830 -- # '[' -z 10046 ']' 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.684 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:26.685 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:26.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.685 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:26.685 10:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:26.943 [2024-06-10 10:55:55.732887] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:26.943 [2024-06-10 10:55:55.732930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.943 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.943 [2024-06-10 10:55:55.791538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.943 [2024-06-10 10:55:55.869679] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.943 [2024-06-10 10:55:55.869718] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.943 [2024-06-10 10:55:55.869724] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.943 [2024-06-10 10:55:55.869730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.943 [2024-06-10 10:55:55.869735] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.943 [2024-06-10 10:55:55.869773] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.943 [2024-06-10 10:55:55.869874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.943 [2024-06-10 10:55:55.869976] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.943 [2024-06-10 10:55:55.869980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@863 -- # return 0 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:27.911 [2024-06-10 10:55:56.612222] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x19418f0/0x1940f30) succeed. 00:28:27.911 [2024-06-10 10:55:56.621124] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1942ca0/0x19414b0) succeed. 00:28:27.911 [2024-06-10 10:55:56.621147] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:27.911 Malloc0 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:27.911 [2024-06-10 10:55:56.680157] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme0n1 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme0n1 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.911 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.170 Malloc1 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.170 10:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:28:28.170 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:28:28.170 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:28:28.170 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:28:28.170 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme1n1 00:28:28.170 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:28:28.170 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme1n1 00:28:28.170 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:28:28.429 10:55:57 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.429 Malloc2 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:28:28.429 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme2n1 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme2n1 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.688 Malloc3 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 
00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.688 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme3n1 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme3n1 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.947 Malloc4 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@10 -- # set +x 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.947 10:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme4n1 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme4n1 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:29.206 Malloc5 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.206 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:28:29.465 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:28:29.465 10:55:58 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0
00:28:29.465 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME
00:28:29.465 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme5n1
00:28:29.465 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME
00:28:29.465 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme5n1
00:28:29.465 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0
00:28:29.465 10:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:28:29.465 [global]
00:28:29.465 thread=1
00:28:29.465 invalidate=1
00:28:29.465 rw=read
00:28:29.465 time_based=1
00:28:29.465 runtime=10
00:28:29.465 ioengine=libaio
00:28:29.465 direct=1
00:28:29.465 bs=1048576
00:28:29.465 iodepth=128
00:28:29.465 norandommap=1
00:28:29.465 numjobs=13
00:28:29.465
00:28:29.465 [job0]
00:28:29.465 filename=/dev/nvme0n1
00:28:29.465 [job1]
00:28:29.465 filename=/dev/nvme2n1
00:28:29.465 [job2]
00:28:29.465 filename=/dev/nvme3n1
00:28:29.465 [job3]
00:28:29.465 filename=/dev/nvme4n1
00:28:29.465 [job4]
00:28:29.465 filename=/dev/nvme5n1
00:28:29.465 [job5]
00:28:29.465 filename=/dev/nvme6n1
00:28:29.465 Could not set queue depth (nvme0n1)
00:28:29.465 Could not set queue depth (nvme2n1)
00:28:29.465 Could not set queue depth (nvme3n1)
00:28:29.465 Could not set queue depth (nvme4n1)
00:28:29.465 Could not set queue depth (nvme5n1)
00:28:29.465 Could not set queue depth (nvme6n1)
00:28:29.724 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:28:29.724 ...
00:28:29.724 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:28:29.724 ...
00:28:29.724 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:28:29.724 ...
00:28:29.724 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:28:29.724 ...
00:28:29.724 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:28:29.724 ...
00:28:29.724 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:28:29.724 ...
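
Up to this point the trace repeats one recipe per loop pass (srq_overwhelm.sh lines 22-28): create an allow-any-host subsystem, back it with a 64 MB malloc bdev of 512-byte blocks, expose the bdev as a namespace, add an RDMA listener on 192.168.100.8:4420, connect to it with nvme-cli, and poll lsblk until the namespace shows up as a block device. A condensed, hypothetical replay of that loop, not the harness's own code: rpc_cmd in the trace is autotest_common.sh's wrapper, so this sketch assumes the same RPCs issued through the SPDK tree's scripts/rpc.py against a target whose RDMA transport was created earlier in the run, it reduces waitforblk to a bare polling loop, and the i-to-cnode index mapping is inferred from the trace (the pass after cnode1 creates cnode2, and so on).

  # Hypothetical condensed replay of the setup loop traced above.
  # Assumes a running SPDK nvmf target with the RDMA transport already created.
  spdk=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  hostid=801347e8-3fd0-e911-906e-0017a4403562
  for i in $(seq 0 5); do
      n=$((i + 1))                       # i=0 creates cnode1, as in the trace
      nqn=nqn.2016-06.io.spdk:cnode$n
      "$spdk/scripts/rpc.py" nvmf_create_subsystem "$nqn" -a -s "SPDK0000000000000$n"
      "$spdk/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$n"   # 64 MB, 512 B blocks
      "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns "$nqn" "Malloc$n"
      "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
      nvme connect -i 15 --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid" \
          --hostid="$hostid" -t rdma -n "$nqn" -a 192.168.100.8 -s 4420
      # waitforblk, simplified: wait for the namespace to appear as nvme<n>n1
      until lsblk -l -o NAME | grep -q -w "nvme${n}n1"; do sleep 1; done
  done

The -a on nvmf_create_subsystem allows any host NQN to connect, which is why no nvmf_subsystem_add_host step appears anywhere in the trace.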
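Comparing the fio-wrapper flags with the job file it dumps suggests the mapping -i 1048576 -> bs, -d 128 -> iodepth, -t read -> rw, -r 10 -> runtime, and -n 13 -> numjobs, with one [jobN] stanza per target block device (an inference from the dump above, not the wrapper's documented interface). With thread=1, six stanzas at numjobs=13 give the 6 x 13 = 78 workers announced by "Starting 78 threads" just below, and the "Could not set queue depth" warnings are non-fatal here: every job summary that follows reports err= 0. A hypothetical single-device command-line equivalent of the dumped [global] section:

  # Stand-alone equivalent for one of the six devices, built from the
  # [global] section above; not how the wrapper itself invokes fio.
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=read --bs=1048576 --iodepth=128 --numjobs=13 --thread \
      --time_based --runtime=10 --norandommap --invalidate=1
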
00:28:29.724 fio-3.35 00:28:29.724 Starting 78 threads 00:28:41.936 00:28:41.936 job0: (groupid=0, jobs=1): err= 0: pid=10751: Mon Jun 10 10:56:09 2024 00:28:41.936 read: IOPS=62, BW=62.1MiB/s (65.2MB/s)(624MiB/10042msec) 00:28:41.936 slat (usec): min=40, max=106891, avg=16027.82, stdev=16972.15 00:28:41.936 clat (msec): min=38, max=3088, avg=1822.45, stdev=869.57 00:28:41.936 lat (msec): min=73, max=3114, avg=1838.48, stdev=873.55 00:28:41.936 clat percentiles (msec): 00:28:41.936 | 1.00th=[ 105], 5.00th=[ 430], 10.00th=[ 718], 20.00th=[ 953], 00:28:41.936 | 30.00th=[ 1183], 40.00th=[ 1418], 50.00th=[ 1754], 60.00th=[ 2333], 00:28:41.936 | 70.00th=[ 2567], 80.00th=[ 2802], 90.00th=[ 2903], 95.00th=[ 3037], 00:28:41.936 | 99.00th=[ 3071], 99.50th=[ 3071], 99.90th=[ 3104], 99.95th=[ 3104], 00:28:41.936 | 99.99th=[ 3104] 00:28:41.936 bw ( KiB/s): min=24576, max=169984, per=1.26%, avg=63078.40, stdev=38237.87, samples=15 00:28:41.937 iops : min= 24, max= 166, avg=61.60, stdev=37.34, samples=15 00:28:41.937 lat (msec) : 50=0.16%, 100=0.80%, 250=1.76%, 500=2.56%, 750=5.77% 00:28:41.937 lat (msec) : 1000=10.74%, 2000=35.42%, >=2000=42.79% 00:28:41.937 cpu : usr=0.00%, sys=0.94%, ctx=1795, majf=0, minf=32769 00:28:41.937 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:28:41.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.937 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.937 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.937 job0: (groupid=0, jobs=1): err= 0: pid=10752: Mon Jun 10 10:56:09 2024 00:28:41.937 read: IOPS=55, BW=55.7MiB/s (58.4MB/s)(563MiB/10116msec) 00:28:41.937 slat (usec): min=45, max=103866, avg=17815.36, stdev=16203.90 00:28:41.937 clat (msec): min=84, max=3010, avg=2132.89, stdev=549.18 00:28:41.937 lat (msec): min=145, max=3031, avg=2150.70, stdev=547.64 00:28:41.937 clat percentiles (msec): 00:28:41.937 | 1.00th=[ 266], 5.00th=[ 1133], 10.00th=[ 1552], 20.00th=[ 1888], 00:28:41.937 | 30.00th=[ 1921], 40.00th=[ 1972], 50.00th=[ 2056], 60.00th=[ 2106], 00:28:41.937 | 70.00th=[ 2467], 80.00th=[ 2735], 90.00th=[ 2836], 95.00th=[ 2903], 00:28:41.937 | 99.00th=[ 3004], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3004], 00:28:41.937 | 99.99th=[ 3004] 00:28:41.937 bw ( KiB/s): min=22528, max=92160, per=1.04%, avg=52404.71, stdev=17318.30, samples=17 00:28:41.937 iops : min= 22, max= 90, avg=51.18, stdev=16.91, samples=17 00:28:41.937 lat (msec) : 100=0.18%, 250=0.71%, 500=0.71%, 750=1.24%, 1000=1.07% 00:28:41.937 lat (msec) : 2000=41.03%, >=2000=55.06% 00:28:41.937 cpu : usr=0.03%, sys=1.22%, ctx=1796, majf=0, minf=32769 00:28:41.937 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8% 00:28:41.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.937 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.937 issued rwts: total=563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.937 job0: (groupid=0, jobs=1): err= 0: pid=10753: Mon Jun 10 10:56:09 2024 00:28:41.937 read: IOPS=56, BW=56.7MiB/s (59.4MB/s)(571MiB/10072msec) 00:28:41.937 slat (usec): min=34, max=104499, avg=17538.36, stdev=17920.39 00:28:41.937 clat (msec): min=55, max=2878, avg=1869.75, stdev=763.24 00:28:41.937 lat (msec): min=96, max=2915, avg=1887.29, stdev=767.65 00:28:41.937 clat 
percentiles (msec): 00:28:41.937 | 1.00th=[ 110], 5.00th=[ 268], 10.00th=[ 542], 20.00th=[ 1062], 00:28:41.937 | 30.00th=[ 1687], 40.00th=[ 2123], 50.00th=[ 2265], 60.00th=[ 2333], 00:28:41.937 | 70.00th=[ 2366], 80.00th=[ 2433], 90.00th=[ 2467], 95.00th=[ 2534], 00:28:41.937 | 99.00th=[ 2836], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869], 00:28:41.937 | 99.99th=[ 2869] 00:28:41.937 bw ( KiB/s): min=16384, max=110592, per=1.12%, avg=56173.71, stdev=23012.79, samples=14 00:28:41.937 iops : min= 16, max= 108, avg=54.86, stdev=22.47, samples=14 00:28:41.937 lat (msec) : 100=0.35%, 250=4.38%, 500=4.38%, 750=5.25%, 1000=4.90% 00:28:41.937 lat (msec) : 2000=14.71%, >=2000=66.02% 00:28:41.937 cpu : usr=0.00%, sys=0.93%, ctx=1738, majf=0, minf=32769 00:28:41.937 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:28:41.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.937 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.937 issued rwts: total=571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.937 job0: (groupid=0, jobs=1): err= 0: pid=10754: Mon Jun 10 10:56:09 2024 00:28:41.937 read: IOPS=97, BW=97.4MiB/s (102MB/s)(979MiB/10054msec) 00:28:41.937 slat (usec): min=39, max=99899, avg=10213.56, stdev=19670.68 00:28:41.937 clat (msec): min=52, max=3357, avg=1153.09, stdev=721.72 00:28:41.937 lat (msec): min=64, max=3359, avg=1163.31, stdev=723.30 00:28:41.937 clat percentiles (msec): 00:28:41.937 | 1.00th=[ 284], 5.00th=[ 726], 10.00th=[ 751], 20.00th=[ 802], 00:28:41.937 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 860], 60.00th=[ 902], 00:28:41.937 | 70.00th=[ 911], 80.00th=[ 1099], 90.00th=[ 2635], 95.00th=[ 3138], 00:28:41.937 | 99.00th=[ 3306], 99.50th=[ 3339], 99.90th=[ 3373], 99.95th=[ 3373], 00:28:41.937 | 99.99th=[ 3373] 00:28:41.937 bw ( KiB/s): min=14336, max=186368, per=2.04%, avg=102582.88, stdev=65782.17, samples=17 00:28:41.937 iops : min= 14, max= 182, avg=100.12, stdev=64.23, samples=17 00:28:41.937 lat (msec) : 100=0.31%, 250=0.61%, 500=0.41%, 750=7.25%, 1000=69.97% 00:28:41.937 lat (msec) : 2000=9.19%, >=2000=12.26% 00:28:41.937 cpu : usr=0.03%, sys=1.13%, ctx=1641, majf=0, minf=32769 00:28:41.937 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:28:41.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:41.937 issued rwts: total=979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.937 job0: (groupid=0, jobs=1): err= 0: pid=10755: Mon Jun 10 10:56:09 2024 00:28:41.937 read: IOPS=59, BW=59.2MiB/s (62.1MB/s)(596MiB/10069msec) 00:28:41.937 slat (usec): min=34, max=91353, avg=16802.52, stdev=15565.56 00:28:41.937 clat (msec): min=52, max=3071, avg=1993.37, stdev=868.13 00:28:41.937 lat (msec): min=95, max=3101, avg=2010.17, stdev=871.18 00:28:41.937 clat percentiles (msec): 00:28:41.937 | 1.00th=[ 126], 5.00th=[ 609], 10.00th=[ 835], 20.00th=[ 936], 00:28:41.937 | 30.00th=[ 1217], 40.00th=[ 1821], 50.00th=[ 2467], 60.00th=[ 2635], 00:28:41.937 | 70.00th=[ 2702], 80.00th=[ 2735], 90.00th=[ 2836], 95.00th=[ 2970], 00:28:41.937 | 99.00th=[ 3037], 99.50th=[ 3071], 99.90th=[ 3071], 99.95th=[ 3071], 00:28:41.937 | 99.99th=[ 3071] 00:28:41.937 bw ( KiB/s): min=28672, max=106496, per=1.06%, avg=53248.00, stdev=20390.20, 
samples=17 00:28:41.937 iops : min= 28, max= 104, avg=52.00, stdev=19.91, samples=17 00:28:41.937 lat (msec) : 100=0.34%, 250=1.51%, 500=2.01%, 750=2.68%, 1000=17.62% 00:28:41.937 lat (msec) : 2000=18.12%, >=2000=57.72% 00:28:41.937 cpu : usr=0.00%, sys=1.20%, ctx=1672, majf=0, minf=32769 00:28:41.937 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:28:41.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.937 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.937 issued rwts: total=596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.937 job0: (groupid=0, jobs=1): err= 0: pid=10756: Mon Jun 10 10:56:09 2024 00:28:41.937 read: IOPS=73, BW=73.3MiB/s (76.9MB/s)(737MiB/10053msec) 00:28:41.937 slat (usec): min=37, max=120117, avg=13565.92, stdev=19499.27 00:28:41.937 clat (msec): min=52, max=2513, avg=1571.58, stdev=516.16 00:28:41.937 lat (msec): min=59, max=2537, avg=1585.14, stdev=515.88 00:28:41.937 clat percentiles (msec): 00:28:41.937 | 1.00th=[ 209], 5.00th=[ 609], 10.00th=[ 1011], 20.00th=[ 1150], 00:28:41.937 | 30.00th=[ 1334], 40.00th=[ 1435], 50.00th=[ 1552], 60.00th=[ 1687], 00:28:41.937 | 70.00th=[ 1838], 80.00th=[ 2022], 90.00th=[ 2333], 95.00th=[ 2433], 00:28:41.937 | 99.00th=[ 2467], 99.50th=[ 2500], 99.90th=[ 2500], 99.95th=[ 2500], 00:28:41.937 | 99.99th=[ 2500] 00:28:41.937 bw ( KiB/s): min=47104, max=143360, per=1.48%, avg=74112.00, stdev=32383.61, samples=16 00:28:41.937 iops : min= 46, max= 140, avg=72.37, stdev=31.62, samples=16 00:28:41.937 lat (msec) : 100=0.27%, 250=1.09%, 500=2.71%, 750=2.04%, 1000=3.12% 00:28:41.937 lat (msec) : 2000=70.42%, >=2000=20.35% 00:28:41.937 cpu : usr=0.03%, sys=0.97%, ctx=1698, majf=0, minf=32769 00:28:41.937 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:28:41.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.937 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.937 issued rwts: total=737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.937 job0: (groupid=0, jobs=1): err= 0: pid=10757: Mon Jun 10 10:56:09 2024 00:28:41.937 read: IOPS=53, BW=53.5MiB/s (56.1MB/s)(540MiB/10086msec) 00:28:41.937 slat (usec): min=57, max=103610, avg=18517.09, stdev=17431.16 00:28:41.937 clat (msec): min=84, max=2962, avg=2243.38, stdev=694.60 00:28:41.937 lat (msec): min=102, max=2966, avg=2261.90, stdev=695.92 00:28:41.937 clat percentiles (msec): 00:28:41.937 | 1.00th=[ 174], 5.00th=[ 535], 10.00th=[ 1003], 20.00th=[ 1871], 00:28:41.937 | 30.00th=[ 2333], 40.00th=[ 2400], 50.00th=[ 2500], 60.00th=[ 2567], 00:28:41.937 | 70.00th=[ 2635], 80.00th=[ 2735], 90.00th=[ 2836], 95.00th=[ 2869], 00:28:41.937 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970], 00:28:41.937 | 99.99th=[ 2970] 00:28:41.937 bw ( KiB/s): min=30720, max=65536, per=0.99%, avg=49754.35, stdev=9895.38, samples=17 00:28:41.937 iops : min= 30, max= 64, avg=48.59, stdev= 9.66, samples=17 00:28:41.937 lat (msec) : 100=0.19%, 250=1.85%, 500=2.22%, 750=2.59%, 1000=2.96% 00:28:41.937 lat (msec) : 2000=12.96%, >=2000=77.22% 00:28:41.937 cpu : usr=0.03%, sys=1.34%, ctx=1753, majf=0, minf=32769 00:28:41.937 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3% 00:28:41.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:41.937 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.937 issued rwts: total=540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.937 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.937 job0: (groupid=0, jobs=1): err= 0: pid=10758: Mon Jun 10 10:56:09 2024 00:28:41.937 read: IOPS=54, BW=54.1MiB/s (56.8MB/s)(548MiB/10125msec) 00:28:41.937 slat (usec): min=39, max=139297, avg=18302.66, stdev=20711.10 00:28:41.937 clat (msec): min=93, max=4032, avg=1972.17, stdev=863.45 00:28:41.937 lat (msec): min=143, max=4046, avg=1990.47, stdev=866.72 00:28:41.937 clat percentiles (msec): 00:28:41.937 | 1.00th=[ 243], 5.00th=[ 617], 10.00th=[ 885], 20.00th=[ 1267], 00:28:41.937 | 30.00th=[ 1536], 40.00th=[ 1703], 50.00th=[ 1921], 60.00th=[ 2198], 00:28:41.937 | 70.00th=[ 2366], 80.00th=[ 2500], 90.00th=[ 3306], 95.00th=[ 3809], 00:28:41.937 | 99.00th=[ 3943], 99.50th=[ 3977], 99.90th=[ 4044], 99.95th=[ 4044], 00:28:41.937 | 99.99th=[ 4044] 00:28:41.937 bw ( KiB/s): min= 4096, max=178176, per=1.22%, avg=61429.14, stdev=40591.94, samples=14 00:28:41.937 iops : min= 4, max= 174, avg=59.93, stdev=39.62, samples=14 00:28:41.937 lat (msec) : 100=0.18%, 250=0.91%, 500=2.74%, 750=3.47%, 1000=6.20% 00:28:41.937 lat (msec) : 2000=39.78%, >=2000=46.72% 00:28:41.937 cpu : usr=0.01%, sys=1.10%, ctx=1825, majf=0, minf=32769 00:28:41.938 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.938 issued rwts: total=548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job0: (groupid=0, jobs=1): err= 0: pid=10759: Mon Jun 10 10:56:09 2024 00:28:41.938 read: IOPS=57, BW=57.3MiB/s (60.0MB/s)(575MiB/10043msec) 00:28:41.938 slat (usec): min=39, max=135907, avg=17390.87, stdev=18926.09 00:28:41.938 clat (msec): min=41, max=2629, avg=1940.90, stdev=674.04 00:28:41.938 lat (msec): min=44, max=2639, avg=1958.29, stdev=676.11 00:28:41.938 clat percentiles (msec): 00:28:41.938 | 1.00th=[ 55], 5.00th=[ 372], 10.00th=[ 1045], 20.00th=[ 1401], 00:28:41.938 | 30.00th=[ 1536], 40.00th=[ 1938], 50.00th=[ 2333], 60.00th=[ 2433], 00:28:41.938 | 70.00th=[ 2467], 80.00th=[ 2500], 90.00th=[ 2534], 95.00th=[ 2534], 00:28:41.938 | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 2635], 99.95th=[ 2635], 00:28:41.938 | 99.99th=[ 2635] 00:28:41.938 bw ( KiB/s): min=16384, max=131072, per=1.13%, avg=56661.33, stdev=24995.03, samples=15 00:28:41.938 iops : min= 16, max= 128, avg=55.33, stdev=24.41, samples=15 00:28:41.938 lat (msec) : 50=0.52%, 100=1.74%, 250=1.57%, 500=1.74%, 750=1.74% 00:28:41.938 lat (msec) : 1000=2.26%, 2000=31.48%, >=2000=58.96% 00:28:41.938 cpu : usr=0.01%, sys=0.93%, ctx=1783, majf=0, minf=32769 00:28:41.938 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.938 issued rwts: total=575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job0: (groupid=0, jobs=1): err= 0: pid=10760: Mon Jun 10 10:56:09 2024 00:28:41.938 read: IOPS=65, BW=65.5MiB/s (68.7MB/s)(660MiB/10079msec) 00:28:41.938 slat (usec): min=31, max=99819, avg=15187.26, 
stdev=16958.25 00:28:41.938 clat (msec): min=52, max=2924, avg=1841.39, stdev=794.05 00:28:41.938 lat (msec): min=96, max=2947, avg=1856.58, stdev=797.83 00:28:41.938 clat percentiles (msec): 00:28:41.938 | 1.00th=[ 209], 5.00th=[ 676], 10.00th=[ 718], 20.00th=[ 869], 00:28:41.938 | 30.00th=[ 1200], 40.00th=[ 1687], 50.00th=[ 2198], 60.00th=[ 2400], 00:28:41.938 | 70.00th=[ 2467], 80.00th=[ 2534], 90.00th=[ 2735], 95.00th=[ 2802], 00:28:41.938 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937], 00:28:41.938 | 99.99th=[ 2937] 00:28:41.938 bw ( KiB/s): min=36864, max=143360, per=1.16%, avg=58140.44, stdev=24426.35, samples=18 00:28:41.938 iops : min= 36, max= 140, avg=56.78, stdev=23.85, samples=18 00:28:41.938 lat (msec) : 100=0.30%, 250=1.06%, 500=1.67%, 750=10.00%, 1000=11.82% 00:28:41.938 lat (msec) : 2000=20.76%, >=2000=54.39% 00:28:41.938 cpu : usr=0.02%, sys=1.15%, ctx=1745, majf=0, minf=32769 00:28:41.938 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.938 issued rwts: total=660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job0: (groupid=0, jobs=1): err= 0: pid=10761: Mon Jun 10 10:56:09 2024 00:28:41.938 read: IOPS=51, BW=51.1MiB/s (53.6MB/s)(517MiB/10114msec) 00:28:41.938 slat (usec): min=57, max=101685, avg=19412.29, stdev=14595.30 00:28:41.938 clat (msec): min=75, max=3001, avg=2298.04, stdev=687.84 00:28:41.938 lat (msec): min=126, max=3012, avg=2317.45, stdev=688.46 00:28:41.938 clat percentiles (msec): 00:28:41.938 | 1.00th=[ 284], 5.00th=[ 625], 10.00th=[ 1133], 20.00th=[ 1955], 00:28:41.938 | 30.00th=[ 2198], 40.00th=[ 2299], 50.00th=[ 2534], 60.00th=[ 2702], 00:28:41.938 | 70.00th=[ 2735], 80.00th=[ 2836], 90.00th=[ 2903], 95.00th=[ 2937], 00:28:41.938 | 99.00th=[ 2970], 99.50th=[ 2970], 99.90th=[ 3004], 99.95th=[ 3004], 00:28:41.938 | 99.99th=[ 3004] 00:28:41.938 bw ( KiB/s): min=34816, max=71680, per=0.99%, avg=49792.00, stdev=9480.53, samples=16 00:28:41.938 iops : min= 34, max= 70, avg=48.62, stdev= 9.26, samples=16 00:28:41.938 lat (msec) : 100=0.19%, 250=0.77%, 500=2.13%, 750=3.68%, 1000=1.93% 00:28:41.938 lat (msec) : 2000=11.61%, >=2000=79.69% 00:28:41.938 cpu : usr=0.02%, sys=1.14%, ctx=1875, majf=0, minf=32769 00:28:41.938 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.8% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:41.938 issued rwts: total=517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job0: (groupid=0, jobs=1): err= 0: pid=10762: Mon Jun 10 10:56:09 2024 00:28:41.938 read: IOPS=64, BW=64.8MiB/s (68.0MB/s)(651MiB/10041msec) 00:28:41.938 slat (usec): min=35, max=122632, avg=15360.91, stdev=19550.69 00:28:41.938 clat (msec): min=38, max=2443, avg=1839.20, stdev=523.84 00:28:41.938 lat (msec): min=73, max=2445, avg=1854.56, stdev=524.22 00:28:41.938 clat percentiles (msec): 00:28:41.938 | 1.00th=[ 105], 5.00th=[ 592], 10.00th=[ 1062], 20.00th=[ 1452], 00:28:41.938 | 30.00th=[ 1888], 40.00th=[ 1989], 50.00th=[ 2039], 60.00th=[ 2106], 00:28:41.938 | 70.00th=[ 2140], 80.00th=[ 2198], 90.00th=[ 2232], 95.00th=[ 2299], 00:28:41.938 | 99.00th=[ 
2366], 99.50th=[ 2400], 99.90th=[ 2433], 99.95th=[ 2433], 00:28:41.938 | 99.99th=[ 2433] 00:28:41.938 bw ( KiB/s): min=36864, max=83968, per=1.18%, avg=59392.00, stdev=11333.60, samples=17 00:28:41.938 iops : min= 36, max= 82, avg=58.00, stdev=11.07, samples=17 00:28:41.938 lat (msec) : 50=0.15%, 100=0.77%, 250=1.54%, 500=1.84%, 750=2.76% 00:28:41.938 lat (msec) : 1000=2.61%, 2000=34.25%, >=2000=56.07% 00:28:41.938 cpu : usr=0.02%, sys=1.03%, ctx=1709, majf=0, minf=32769 00:28:41.938 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.938 issued rwts: total=651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job0: (groupid=0, jobs=1): err= 0: pid=10764: Mon Jun 10 10:56:09 2024 00:28:41.938 read: IOPS=56, BW=56.2MiB/s (58.9MB/s)(566MiB/10077msec) 00:28:41.938 slat (usec): min=55, max=107850, avg=17667.11, stdev=17806.04 00:28:41.938 clat (msec): min=74, max=2948, avg=2150.11, stdev=723.48 00:28:41.938 lat (msec): min=76, max=2975, avg=2167.78, stdev=725.57 00:28:41.938 clat percentiles (msec): 00:28:41.938 | 1.00th=[ 153], 5.00th=[ 422], 10.00th=[ 860], 20.00th=[ 1703], 00:28:41.938 | 30.00th=[ 2140], 40.00th=[ 2232], 50.00th=[ 2366], 60.00th=[ 2500], 00:28:41.938 | 70.00th=[ 2635], 80.00th=[ 2702], 90.00th=[ 2802], 95.00th=[ 2836], 00:28:41.938 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 2937], 99.95th=[ 2937], 00:28:41.938 | 99.99th=[ 2937] 00:28:41.938 bw ( KiB/s): min=24576, max=75776, per=1.05%, avg=52880.29, stdev=12896.45, samples=17 00:28:41.938 iops : min= 24, max= 74, avg=51.59, stdev=12.59, samples=17 00:28:41.938 lat (msec) : 100=0.88%, 250=1.77%, 500=3.18%, 750=2.30%, 1000=3.36% 00:28:41.938 lat (msec) : 2000=12.72%, >=2000=75.80% 00:28:41.938 cpu : usr=0.01%, sys=1.36%, ctx=1781, majf=0, minf=32769 00:28:41.938 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.9% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.938 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job1: (groupid=0, jobs=1): err= 0: pid=10777: Mon Jun 10 10:56:09 2024 00:28:41.938 read: IOPS=77, BW=77.6MiB/s (81.4MB/s)(782MiB/10075msec) 00:28:41.938 slat (usec): min=31, max=112065, avg=12795.00, stdev=18976.57 00:28:41.938 clat (msec): min=66, max=2956, avg=1524.27, stdev=828.97 00:28:41.938 lat (msec): min=102, max=2963, avg=1537.07, stdev=833.58 00:28:41.938 clat percentiles (msec): 00:28:41.938 | 1.00th=[ 213], 5.00th=[ 393], 10.00th=[ 667], 20.00th=[ 743], 00:28:41.938 | 30.00th=[ 885], 40.00th=[ 986], 50.00th=[ 1284], 60.00th=[ 1636], 00:28:41.938 | 70.00th=[ 2123], 80.00th=[ 2635], 90.00th=[ 2735], 95.00th=[ 2769], 00:28:41.938 | 99.00th=[ 2869], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970], 00:28:41.938 | 99.99th=[ 2970] 00:28:41.938 bw ( KiB/s): min=22528, max=172032, per=1.48%, avg=74379.61, stdev=51744.57, samples=18 00:28:41.938 iops : min= 22, max= 168, avg=72.56, stdev=50.45, samples=18 00:28:41.938 lat (msec) : 100=0.13%, 250=2.30%, 500=3.07%, 750=15.60%, 1000=19.31% 00:28:41.938 lat (msec) : 2000=27.11%, >=2000=32.48% 00:28:41.938 cpu : usr=0.05%, sys=1.30%, ctx=1645, majf=0, 
minf=32769 00:28:41.938 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=91.9% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.938 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job1: (groupid=0, jobs=1): err= 0: pid=10778: Mon Jun 10 10:56:09 2024 00:28:41.938 read: IOPS=57, BW=57.3MiB/s (60.0MB/s)(579MiB/10112msec) 00:28:41.938 slat (usec): min=48, max=129126, avg=17314.06, stdev=21913.05 00:28:41.938 clat (msec): min=84, max=2914, avg=2048.11, stdev=561.73 00:28:41.938 lat (msec): min=122, max=2960, avg=2065.43, stdev=560.00 00:28:41.938 clat percentiles (msec): 00:28:41.938 | 1.00th=[ 171], 5.00th=[ 902], 10.00th=[ 1452], 20.00th=[ 1636], 00:28:41.938 | 30.00th=[ 1838], 40.00th=[ 2005], 50.00th=[ 2165], 60.00th=[ 2232], 00:28:41.938 | 70.00th=[ 2333], 80.00th=[ 2500], 90.00th=[ 2769], 95.00th=[ 2836], 00:28:41.938 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2903], 99.95th=[ 2903], 00:28:41.938 | 99.99th=[ 2903] 00:28:41.938 bw ( KiB/s): min=26624, max=122880, per=1.15%, avg=57728.00, stdev=23721.68, samples=16 00:28:41.938 iops : min= 26, max= 120, avg=56.37, stdev=23.17, samples=16 00:28:41.938 lat (msec) : 100=0.17%, 250=1.04%, 500=1.73%, 750=1.38%, 1000=1.21% 00:28:41.938 lat (msec) : 2000=33.85%, >=2000=60.62% 00:28:41.938 cpu : usr=0.05%, sys=1.24%, ctx=1713, majf=0, minf=32769 00:28:41.938 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:28:41.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.938 issued rwts: total=579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.938 job1: (groupid=0, jobs=1): err= 0: pid=10780: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=63, BW=63.1MiB/s (66.2MB/s)(636MiB/10080msec) 00:28:41.939 slat (usec): min=31, max=190436, avg=15767.70, stdev=24904.73 00:28:41.939 clat (msec): min=48, max=2811, avg=1796.71, stdev=574.39 00:28:41.939 lat (msec): min=86, max=2821, avg=1812.48, stdev=574.27 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 148], 5.00th=[ 558], 10.00th=[ 1200], 20.00th=[ 1418], 00:28:41.939 | 30.00th=[ 1519], 40.00th=[ 1737], 50.00th=[ 1754], 60.00th=[ 1921], 00:28:41.939 | 70.00th=[ 2165], 80.00th=[ 2299], 90.00th=[ 2534], 95.00th=[ 2635], 00:28:41.939 | 99.00th=[ 2735], 99.50th=[ 2769], 99.90th=[ 2802], 99.95th=[ 2802], 00:28:41.939 | 99.99th=[ 2802] 00:28:41.939 bw ( KiB/s): min=30720, max=157696, per=1.30%, avg=64992.50, stdev=33988.68, samples=16 00:28:41.939 iops : min= 30, max= 154, avg=63.44, stdev=33.19, samples=16 00:28:41.939 lat (msec) : 50=0.16%, 100=0.31%, 250=1.42%, 500=2.52%, 750=1.73% 00:28:41.939 lat (msec) : 1000=2.20%, 2000=54.40%, >=2000=37.26% 00:28:41.939 cpu : usr=0.04%, sys=1.10%, ctx=1574, majf=0, minf=32769 00:28:41.939 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:28:41.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.939 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.939 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.939 job1: (groupid=0, 
jobs=1): err= 0: pid=10781: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=52, BW=52.2MiB/s (54.8MB/s)(527MiB/10088msec) 00:28:41.939 slat (usec): min=35, max=129723, avg=18974.54, stdev=21107.86 00:28:41.939 clat (msec): min=86, max=3194, avg=2259.68, stdev=626.53 00:28:41.939 lat (msec): min=145, max=3234, avg=2278.66, stdev=626.47 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 239], 5.00th=[ 760], 10.00th=[ 1385], 20.00th=[ 1989], 00:28:41.939 | 30.00th=[ 2232], 40.00th=[ 2265], 50.00th=[ 2333], 60.00th=[ 2433], 00:28:41.939 | 70.00th=[ 2534], 80.00th=[ 2802], 90.00th=[ 2937], 95.00th=[ 3004], 00:28:41.939 | 99.00th=[ 3138], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205], 00:28:41.939 | 99.99th=[ 3205] 00:28:41.939 bw ( KiB/s): min=24576, max=81920, per=0.96%, avg=48188.24, stdev=15964.51, samples=17 00:28:41.939 iops : min= 24, max= 80, avg=47.06, stdev=15.59, samples=17 00:28:41.939 lat (msec) : 100=0.19%, 250=0.95%, 500=1.52%, 750=2.28%, 1000=2.09% 00:28:41.939 lat (msec) : 2000=13.09%, >=2000=79.89% 00:28:41.939 cpu : usr=0.03%, sys=1.06%, ctx=1618, majf=0, minf=32769 00:28:41.939 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:28:41.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.939 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.939 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.939 job1: (groupid=0, jobs=1): err= 0: pid=10782: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=46, BW=46.3MiB/s (48.6MB/s)(468MiB/10102msec) 00:28:41.939 slat (usec): min=106, max=102563, avg=21403.45, stdev=21454.19 00:28:41.939 clat (msec): min=83, max=3051, avg=2401.37, stdev=686.49 00:28:41.939 lat (msec): min=106, max=3076, avg=2422.77, stdev=685.24 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 188], 5.00th=[ 676], 10.00th=[ 1133], 20.00th=[ 2299], 00:28:41.939 | 30.00th=[ 2534], 40.00th=[ 2601], 50.00th=[ 2635], 60.00th=[ 2668], 00:28:41.939 | 70.00th=[ 2735], 80.00th=[ 2802], 90.00th=[ 2903], 95.00th=[ 2970], 00:28:41.939 | 99.00th=[ 3037], 99.50th=[ 3037], 99.90th=[ 3037], 99.95th=[ 3037], 00:28:41.939 | 99.99th=[ 3037] 00:28:41.939 bw ( KiB/s): min=34816, max=63488, per=0.93%, avg=46421.33, stdev=7491.58, samples=15 00:28:41.939 iops : min= 34, max= 62, avg=45.33, stdev= 7.32, samples=15 00:28:41.939 lat (msec) : 100=0.21%, 250=1.50%, 500=2.78%, 750=1.07%, 1000=2.78% 00:28:41.939 lat (msec) : 2000=8.55%, >=2000=83.12% 00:28:41.939 cpu : usr=0.01%, sys=1.01%, ctx=1700, majf=0, minf=32769 00:28:41.939 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.5% 00:28:41.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.939 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:41.939 issued rwts: total=468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.939 job1: (groupid=0, jobs=1): err= 0: pid=10783: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=52, BW=52.0MiB/s (54.6MB/s)(526MiB/10108msec) 00:28:41.939 slat (usec): min=361, max=100573, avg=19094.05, stdev=19580.96 00:28:41.939 clat (msec): min=62, max=3218, avg=2291.09, stdev=705.87 00:28:41.939 lat (msec): min=111, max=3228, avg=2310.18, stdev=706.25 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 205], 5.00th=[ 743], 10.00th=[ 1183], 20.00th=[ 1703], 
00:28:41.939 | 30.00th=[ 2232], 40.00th=[ 2467], 50.00th=[ 2534], 60.00th=[ 2635], 00:28:41.939 | 70.00th=[ 2735], 80.00th=[ 2802], 90.00th=[ 2937], 95.00th=[ 3004], 00:28:41.939 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205], 00:28:41.939 | 99.99th=[ 3205] 00:28:41.939 bw ( KiB/s): min=28672, max=69493, per=0.90%, avg=45275.83, stdev=10838.91, samples=18 00:28:41.939 iops : min= 28, max= 67, avg=44.17, stdev=10.47, samples=18 00:28:41.939 lat (msec) : 100=0.19%, 250=1.52%, 500=0.95%, 750=2.47%, 1000=3.61% 00:28:41.939 lat (msec) : 2000=15.78%, >=2000=75.48% 00:28:41.939 cpu : usr=0.02%, sys=0.97%, ctx=1654, majf=0, minf=32769 00:28:41.939 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:28:41.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.939 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.939 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.939 job1: (groupid=0, jobs=1): err= 0: pid=10784: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=52, BW=53.0MiB/s (55.6MB/s)(538MiB/10151msec) 00:28:41.939 slat (usec): min=52, max=119053, avg=18699.45, stdev=22955.93 00:28:41.939 clat (msec): min=88, max=3333, avg=2226.18, stdev=709.89 00:28:41.939 lat (msec): min=164, max=3349, avg=2244.88, stdev=708.72 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 338], 5.00th=[ 785], 10.00th=[ 1301], 20.00th=[ 1586], 00:28:41.939 | 30.00th=[ 1921], 40.00th=[ 2123], 50.00th=[ 2433], 60.00th=[ 2534], 00:28:41.939 | 70.00th=[ 2702], 80.00th=[ 2903], 90.00th=[ 3037], 95.00th=[ 3138], 00:28:41.939 | 99.00th=[ 3205], 99.50th=[ 3272], 99.90th=[ 3339], 99.95th=[ 3339], 00:28:41.939 | 99.99th=[ 3339] 00:28:41.939 bw ( KiB/s): min=10240, max=151552, per=0.98%, avg=49392.94, stdev=30306.61, samples=17 00:28:41.939 iops : min= 10, max= 148, avg=48.24, stdev=29.60, samples=17 00:28:41.939 lat (msec) : 100=0.19%, 250=0.37%, 500=1.67%, 750=2.42%, 1000=1.86% 00:28:41.939 lat (msec) : 2000=26.02%, >=2000=67.47% 00:28:41.939 cpu : usr=0.00%, sys=1.18%, ctx=1647, majf=0, minf=32769 00:28:41.939 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3% 00:28:41.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.939 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.939 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.939 job1: (groupid=0, jobs=1): err= 0: pid=10785: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=59, BW=59.5MiB/s (62.4MB/s)(598MiB/10054msec) 00:28:41.939 slat (usec): min=31, max=162703, avg=16722.80, stdev=20784.13 00:28:41.939 clat (msec): min=51, max=2959, avg=1800.46, stdev=786.50 00:28:41.939 lat (msec): min=55, max=3014, avg=1817.18, stdev=789.43 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 66], 5.00th=[ 321], 10.00th=[ 709], 20.00th=[ 1011], 00:28:41.939 | 30.00th=[ 1284], 40.00th=[ 1687], 50.00th=[ 2022], 60.00th=[ 2299], 00:28:41.939 | 70.00th=[ 2400], 80.00th=[ 2500], 90.00th=[ 2601], 95.00th=[ 2869], 00:28:41.939 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970], 00:28:41.939 | 99.99th=[ 2970] 00:28:41.939 bw ( KiB/s): min= 4096, max=165888, per=1.25%, avg=62756.57, stdev=44024.54, samples=14 00:28:41.939 iops : min= 4, max= 162, avg=61.29, stdev=42.99, samples=14 
00:28:41.939 lat (msec) : 100=1.84%, 250=1.51%, 500=3.34%, 750=6.69%, 1000=6.35% 00:28:41.939 lat (msec) : 2000=29.77%, >=2000=50.50% 00:28:41.939 cpu : usr=0.01%, sys=0.89%, ctx=1695, majf=0, minf=32769 00:28:41.939 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.5% 00:28:41.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.939 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.939 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.939 job1: (groupid=0, jobs=1): err= 0: pid=10786: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=100, BW=101MiB/s (106MB/s)(1012MiB/10048msec) 00:28:41.939 slat (usec): min=33, max=100626, avg=9878.53, stdev=19560.37 00:28:41.939 clat (msec): min=47, max=2524, avg=1116.26, stdev=565.03 00:28:41.939 lat (msec): min=54, max=2533, avg=1126.13, stdev=567.79 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 134], 5.00th=[ 651], 10.00th=[ 684], 20.00th=[ 760], 00:28:41.939 | 30.00th=[ 802], 40.00th=[ 835], 50.00th=[ 869], 60.00th=[ 894], 00:28:41.939 | 70.00th=[ 1045], 80.00th=[ 1687], 90.00th=[ 2265], 95.00th=[ 2366], 00:28:41.939 | 99.00th=[ 2433], 99.50th=[ 2467], 99.90th=[ 2500], 99.95th=[ 2534], 00:28:41.939 | 99.99th=[ 2534] 00:28:41.939 bw ( KiB/s): min=32768, max=190464, per=2.18%, avg=109568.00, stdev=59101.75, samples=16 00:28:41.939 iops : min= 32, max= 186, avg=107.00, stdev=57.72, samples=16 00:28:41.939 lat (msec) : 50=0.10%, 100=0.79%, 250=1.19%, 500=0.69%, 750=14.92% 00:28:41.939 lat (msec) : 1000=51.48%, 2000=18.77%, >=2000=12.06% 00:28:41.939 cpu : usr=0.01%, sys=1.28%, ctx=1530, majf=0, minf=32769 00:28:41.939 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:28:41.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:41.939 issued rwts: total=1012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.939 job1: (groupid=0, jobs=1): err= 0: pid=10787: Mon Jun 10 10:56:09 2024 00:28:41.939 read: IOPS=49, BW=49.8MiB/s (52.3MB/s)(502MiB/10072msec) 00:28:41.939 slat (usec): min=117, max=156612, avg=19921.26, stdev=19662.93 00:28:41.939 clat (msec): min=69, max=2972, avg=2290.10, stdev=686.27 00:28:41.939 lat (msec): min=73, max=3001, avg=2310.02, stdev=685.72 00:28:41.939 clat percentiles (msec): 00:28:41.939 | 1.00th=[ 132], 5.00th=[ 575], 10.00th=[ 1133], 20.00th=[ 2089], 00:28:41.939 | 30.00th=[ 2299], 40.00th=[ 2433], 50.00th=[ 2534], 60.00th=[ 2635], 00:28:41.939 | 70.00th=[ 2668], 80.00th=[ 2735], 90.00th=[ 2836], 95.00th=[ 2903], 00:28:41.940 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970], 00:28:41.940 | 99.99th=[ 2970] 00:28:41.940 bw ( KiB/s): min=12288, max=79872, per=0.96%, avg=48012.50, stdev=14845.30, samples=16 00:28:41.940 iops : min= 12, max= 78, avg=46.88, stdev=14.50, samples=16 00:28:41.940 lat (msec) : 100=0.60%, 250=1.59%, 500=2.39%, 750=2.19%, 1000=2.39% 00:28:41.940 lat (msec) : 2000=9.16%, >=2000=81.67% 00:28:41.940 cpu : usr=0.02%, sys=1.05%, ctx=1693, majf=0, minf=32769 00:28:41.940 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5% 00:28:41.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.940 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.3% 00:28:41.940 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.940 job1: (groupid=0, jobs=1): err= 0: pid=10788: Mon Jun 10 10:56:09 2024 00:28:41.940 read: IOPS=43, BW=43.9MiB/s (46.1MB/s)(442MiB/10061msec) 00:28:41.940 slat (usec): min=43, max=103543, avg=22623.78, stdev=22692.17 00:28:41.940 clat (msec): min=58, max=4035, avg=2613.28, stdev=883.78 00:28:41.940 lat (msec): min=62, max=4068, avg=2635.90, stdev=883.78 00:28:41.940 clat percentiles (msec): 00:28:41.940 | 1.00th=[ 128], 5.00th=[ 567], 10.00th=[ 1250], 20.00th=[ 2106], 00:28:41.940 | 30.00th=[ 2500], 40.00th=[ 2601], 50.00th=[ 2735], 60.00th=[ 2903], 00:28:41.940 | 70.00th=[ 3004], 80.00th=[ 3239], 90.00th=[ 3675], 95.00th=[ 3876], 00:28:41.940 | 99.00th=[ 4010], 99.50th=[ 4010], 99.90th=[ 4044], 99.95th=[ 4044], 00:28:41.940 | 99.99th=[ 4044] 00:28:41.940 bw ( KiB/s): min=12288, max=67584, per=0.75%, avg=37632.00, stdev=14107.44, samples=16 00:28:41.940 iops : min= 12, max= 66, avg=36.75, stdev=13.78, samples=16 00:28:41.940 lat (msec) : 100=0.68%, 250=1.81%, 500=2.04%, 750=2.04%, 1000=1.58% 00:28:41.940 lat (msec) : 2000=10.18%, >=2000=81.67% 00:28:41.940 cpu : usr=0.05%, sys=0.96%, ctx=1720, majf=0, minf=32769 00:28:41.940 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.7% 00:28:41.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.940 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:41.940 issued rwts: total=442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.940 job1: (groupid=0, jobs=1): err= 0: pid=10789: Mon Jun 10 10:56:09 2024 00:28:41.940 read: IOPS=64, BW=64.7MiB/s (67.8MB/s)(653MiB/10097msec) 00:28:41.940 slat (usec): min=35, max=220244, avg=15331.07, stdev=22207.38 00:28:41.940 clat (msec): min=84, max=2903, avg=1836.06, stdev=681.43 00:28:41.940 lat (msec): min=97, max=2904, avg=1851.39, stdev=682.08 00:28:41.940 clat percentiles (msec): 00:28:41.940 | 1.00th=[ 239], 5.00th=[ 439], 10.00th=[ 776], 20.00th=[ 1334], 00:28:41.940 | 30.00th=[ 1603], 40.00th=[ 1720], 50.00th=[ 1821], 60.00th=[ 1955], 00:28:41.940 | 70.00th=[ 2265], 80.00th=[ 2534], 90.00th=[ 2735], 95.00th=[ 2802], 00:28:41.940 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2903], 99.95th=[ 2903], 00:28:41.940 | 99.99th=[ 2903] 00:28:41.940 bw ( KiB/s): min= 8192, max=145408, per=1.26%, avg=63238.88, stdev=34044.52, samples=17 00:28:41.940 iops : min= 8, max= 142, avg=61.71, stdev=33.24, samples=17 00:28:41.940 lat (msec) : 100=0.31%, 250=0.77%, 500=4.90%, 750=3.68%, 1000=2.45% 00:28:41.940 lat (msec) : 2000=49.46%, >=2000=38.44% 00:28:41.940 cpu : usr=0.00%, sys=1.28%, ctx=1628, majf=0, minf=32769 00:28:41.940 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:28:41.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.940 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.940 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.940 job1: (groupid=0, jobs=1): err= 0: pid=10790: Mon Jun 10 10:56:09 2024 00:28:41.940 read: IOPS=43, BW=44.0MiB/s (46.1MB/s)(443MiB/10070msec) 00:28:41.940 slat (usec): min=39, max=136611, avg=22587.73, stdev=27020.62 00:28:41.940 clat (msec): min=61, max=4807, avg=2579.25, stdev=1038.78 
00:28:41.940 lat (msec): min=110, max=4869, avg=2601.84, stdev=1037.04 00:28:41.940 clat percentiles (msec): 00:28:41.940 | 1.00th=[ 220], 5.00th=[ 1062], 10.00th=[ 1804], 20.00th=[ 1888], 00:28:41.940 | 30.00th=[ 1938], 40.00th=[ 2089], 50.00th=[ 2299], 60.00th=[ 2467], 00:28:41.940 | 70.00th=[ 2869], 80.00th=[ 3540], 90.00th=[ 4329], 95.00th=[ 4597], 00:28:41.940 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:28:41.940 | 99.99th=[ 4799] 00:28:41.940 bw ( KiB/s): min=16384, max=77824, per=0.81%, avg=40448.00, stdev=19073.17, samples=16 00:28:41.940 iops : min= 16, max= 76, avg=39.50, stdev=18.63, samples=16 00:28:41.940 lat (msec) : 100=0.23%, 250=1.13%, 500=1.35%, 750=1.35%, 1000=0.90% 00:28:41.940 lat (msec) : 2000=26.64%, >=2000=68.40% 00:28:41.940 cpu : usr=0.02%, sys=0.90%, ctx=1662, majf=0, minf=32769 00:28:41.940 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:28:41.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.940 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:41.940 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.940 job2: (groupid=0, jobs=1): err= 0: pid=10795: Mon Jun 10 10:56:09 2024 00:28:41.940 read: IOPS=63, BW=63.6MiB/s (66.6MB/s)(642MiB/10102msec) 00:28:41.940 slat (usec): min=35, max=99135, avg=15573.70, stdev=17671.81 00:28:41.940 clat (msec): min=100, max=3023, avg=1915.52, stdev=700.09 00:28:41.940 lat (msec): min=116, max=3044, avg=1931.09, stdev=702.81 00:28:41.940 clat percentiles (msec): 00:28:41.940 | 1.00th=[ 190], 5.00th=[ 718], 10.00th=[ 969], 20.00th=[ 1133], 00:28:41.940 | 30.00th=[ 1603], 40.00th=[ 1871], 50.00th=[ 2056], 60.00th=[ 2198], 00:28:41.940 | 70.00th=[ 2299], 80.00th=[ 2467], 90.00th=[ 2836], 95.00th=[ 2937], 00:28:41.940 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3037], 99.95th=[ 3037], 00:28:41.940 | 99.99th=[ 3037] 00:28:41.940 bw ( KiB/s): min=32768, max=79872, per=1.11%, avg=55511.58, stdev=16339.78, samples=19 00:28:41.940 iops : min= 32, max= 78, avg=54.21, stdev=15.96, samples=19 00:28:41.940 lat (msec) : 250=1.71%, 500=2.02%, 750=1.71%, 1000=6.39%, 2000=33.49% 00:28:41.940 lat (msec) : >=2000=54.67% 00:28:41.940 cpu : usr=0.02%, sys=1.41%, ctx=1638, majf=0, minf=32769 00:28:41.940 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:28:41.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.940 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.940 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.940 job2: (groupid=0, jobs=1): err= 0: pid=10796: Mon Jun 10 10:56:09 2024 00:28:41.940 read: IOPS=56, BW=56.9MiB/s (59.6MB/s)(572MiB/10059msec) 00:28:41.940 slat (usec): min=57, max=98909, avg=17481.60, stdev=20420.89 00:28:41.940 clat (msec): min=57, max=2777, avg=2051.40, stdev=662.14 00:28:41.940 lat (msec): min=61, max=2823, avg=2068.88, stdev=663.69 00:28:41.940 clat percentiles (msec): 00:28:41.940 | 1.00th=[ 86], 5.00th=[ 426], 10.00th=[ 1003], 20.00th=[ 1737], 00:28:41.940 | 30.00th=[ 1888], 40.00th=[ 2232], 50.00th=[ 2366], 60.00th=[ 2400], 00:28:41.940 | 70.00th=[ 2467], 80.00th=[ 2500], 90.00th=[ 2601], 95.00th=[ 2668], 00:28:41.940 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 2769], 99.95th=[ 2769], 00:28:41.940 | 99.99th=[ 2769] 
00:28:41.940 bw ( KiB/s): min=36864, max=65536, per=1.05%, avg=52474.63, stdev=9372.30, samples=16
00:28:41.940 iops : min= 36, max= 64, avg=51.19, stdev= 9.22, samples=16
00:28:41.940 lat (msec) : 100=1.40%, 250=2.27%, 500=1.57%, 750=2.45%, 1000=2.27%
00:28:41.940 lat (msec) : 2000=25.17%, >=2000=64.86%
00:28:41.940 cpu : usr=0.01%, sys=0.95%, ctx=1559, majf=0, minf=32769
00:28:41.940 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0%
00:28:41.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.940 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.940 issued rwts: total=572,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.940 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.940 job2: (groupid=0, jobs=1): err= 0: pid=10797: Mon Jun 10 10:56:09 2024
00:28:41.940 read: IOPS=73, BW=73.5MiB/s (77.1MB/s)(741MiB/10084msec)
00:28:41.940 slat (usec): min=34, max=104240, avg=13506.08, stdev=17403.61
00:28:41.940 clat (msec): min=73, max=2286, avg=1591.03, stdev=569.63
00:28:41.940 lat (msec): min=89, max=2298, avg=1604.53, stdev=572.23
00:28:41.940 clat percentiles (msec):
00:28:41.940 | 1.00th=[ 133], 5.00th=[ 355], 10.00th=[ 567], 20.00th=[ 1167],
00:28:41.940 | 30.00th=[ 1452], 40.00th=[ 1586], 50.00th=[ 1687], 60.00th=[ 1921],
00:28:41.940 | 70.00th=[ 2022], 80.00th=[ 2089], 90.00th=[ 2165], 95.00th=[ 2198],
00:28:41.940 | 99.00th=[ 2265], 99.50th=[ 2265], 99.90th=[ 2299], 99.95th=[ 2299],
00:28:41.940 | 99.99th=[ 2299]
00:28:41.940 bw ( KiB/s): min=10240, max=157696, per=1.47%, avg=73848.47, stdev=32353.14, samples=17
00:28:41.940 iops : min= 10, max= 154, avg=72.12, stdev=31.59, samples=17
00:28:41.940 lat (msec) : 100=0.67%, 250=1.62%, 500=6.21%, 750=4.18%, 1000=1.89%
00:28:41.940 lat (msec) : 2000=52.23%, >=2000=33.20%
00:28:41.940 cpu : usr=0.00%, sys=1.18%, ctx=1706, majf=0, minf=32769
00:28:41.940 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5%
00:28:41.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.940 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.940 issued rwts: total=741,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.940 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.940 job2: (groupid=0, jobs=1): err= 0: pid=10798: Mon Jun 10 10:56:09 2024
00:28:41.940 read: IOPS=54, BW=54.3MiB/s (57.0MB/s)(547MiB/10066msec)
00:28:41.940 slat (usec): min=34, max=119347, avg=18282.75, stdev=21425.52
00:28:41.940 clat (msec): min=62, max=2925, avg=1986.67, stdev=646.19
00:28:41.940 lat (msec): min=71, max=2935, avg=2004.96, stdev=647.68
00:28:41.940 clat percentiles (msec):
00:28:41.940 | 1.00th=[ 78], 5.00th=[ 493], 10.00th=[ 1028], 20.00th=[ 1603],
00:28:41.940 | 30.00th=[ 1770], 40.00th=[ 1888], 50.00th=[ 2198], 60.00th=[ 2333],
00:28:41.940 | 70.00th=[ 2433], 80.00th=[ 2500], 90.00th=[ 2601], 95.00th=[ 2668],
00:28:41.940 | 99.00th=[ 2869], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937],
00:28:41.940 | 99.99th=[ 2937]
00:28:41.940 bw ( KiB/s): min=32768, max=114688, per=1.14%, avg=57328.20, stdev=19163.39, samples=15
00:28:41.940 iops : min= 32, max= 112, avg=55.93, stdev=18.71, samples=15
00:28:41.940 lat (msec) : 100=1.46%, 250=2.19%, 500=1.46%, 750=2.38%, 1000=2.01%
00:28:41.940 lat (msec) : 2000=35.28%, >=2000=55.21%
00:28:41.940 cpu : usr=0.02%, sys=0.91%, ctx=1674, majf=0, minf=32769
00:28:41.941 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5%
00:28:41.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.941 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.941 issued rwts: total=547,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.941 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.941 job2: (groupid=0, jobs=1): err= 0: pid=10799: Mon Jun 10 10:56:09 2024
00:28:41.941 read: IOPS=132, BW=133MiB/s (139MB/s)(1332MiB/10034msec)
00:28:41.941 slat (usec): min=33, max=57479, avg=7507.83, stdev=10862.83
00:28:41.941 clat (msec): min=29, max=1860, avg=842.03, stdev=250.47
00:28:41.941 lat (msec): min=51, max=1896, avg=849.54, stdev=252.50
00:28:41.941 clat percentiles (msec):
00:28:41.941 | 1.00th=[ 161], 5.00th=[ 510], 10.00th=[ 535], 20.00th=[ 634],
00:28:41.941 | 30.00th=[ 684], 40.00th=[ 802], 50.00th=[ 869], 60.00th=[ 894],
00:28:41.941 | 70.00th=[ 927], 80.00th=[ 1020], 90.00th=[ 1133], 95.00th=[ 1267],
00:28:41.941 | 99.00th=[ 1603], 99.50th=[ 1737], 99.90th=[ 1821], 99.95th=[ 1854],
00:28:41.941 | 99.99th=[ 1854]
00:28:41.941 bw ( KiB/s): min=24576, max=247808, per=2.97%, avg=148992.00, stdev=54300.97, samples=16
00:28:41.941 iops : min= 24, max= 242, avg=145.50, stdev=53.03, samples=16
00:28:41.941 lat (msec) : 50=0.08%, 100=0.45%, 250=0.90%, 500=2.25%, 750=32.58%
00:28:41.941 lat (msec) : 1000=42.57%, 2000=21.17%
00:28:41.941 cpu : usr=0.04%, sys=1.43%, ctx=1694, majf=0, minf=32769
00:28:41.941 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3%
00:28:41.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.941 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.941 issued rwts: total=1332,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.941 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.941 job2: (groupid=0, jobs=1): err= 0: pid=10800: Mon Jun 10 10:56:09 2024
00:28:41.941 read: IOPS=60, BW=60.8MiB/s (63.8MB/s)(615MiB/10110msec)
00:28:41.941 slat (usec): min=36, max=96548, avg=16279.72, stdev=18803.28
00:28:41.941 clat (msec): min=94, max=2760, avg=1948.23, stdev=676.49
00:28:41.941 lat (msec): min=109, max=2775, avg=1964.51, stdev=678.23
00:28:41.941 clat percentiles (msec):
00:28:41.941 | 1.00th=[ 249], 5.00th=[ 609], 10.00th=[ 1003], 20.00th=[ 1250],
00:28:41.941 | 30.00th=[ 1418], 40.00th=[ 2039], 50.00th=[ 2232], 60.00th=[ 2333],
00:28:41.941 | 70.00th=[ 2500], 80.00th=[ 2534], 90.00th=[ 2601], 95.00th=[ 2668],
00:28:41.941 | 99.00th=[ 2735], 99.50th=[ 2735], 99.90th=[ 2769], 99.95th=[ 2769],
00:28:41.941 | 99.99th=[ 2769]
00:28:41.941 bw ( KiB/s): min=14336, max=131072, per=1.17%, avg=58669.18, stdev=27188.51, samples=17
00:28:41.941 iops : min= 14, max= 128, avg=57.29, stdev=26.55, samples=17
00:28:41.941 lat (msec) : 100=0.16%, 250=0.98%, 500=2.44%, 750=2.60%, 1000=3.58%
00:28:41.941 lat (msec) : 2000=28.13%, >=2000=62.11%
00:28:41.941 cpu : usr=0.01%, sys=1.52%, ctx=1598, majf=0, minf=32769
00:28:41.941 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8%
00:28:41.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.941 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.941 issued rwts: total=615,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.941 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.941 job2: (groupid=0, jobs=1): err= 0: pid=10802: Mon Jun 10 10:56:09 2024
00:28:41.941 read: IOPS=49, BW=49.9MiB/s (52.3MB/s)(503MiB/10077msec)
00:28:41.941 slat (usec): min=33, max=148575, avg=19880.50, stdev=23086.93
00:28:41.941 clat (msec): min=75, max=3090, avg=2389.13, stdev=782.42
00:28:41.941 lat (msec): min=125, max=3125, avg=2409.01, stdev=783.81
00:28:41.941 clat percentiles (msec):
00:28:41.941 | 1.00th=[ 213], 5.00th=[ 527], 10.00th=[ 852], 20.00th=[ 1938],
00:28:41.941 | 30.00th=[ 2567], 40.00th=[ 2668], 50.00th=[ 2769], 60.00th=[ 2802],
00:28:41.941 | 70.00th=[ 2836], 80.00th=[ 2903], 90.00th=[ 2937], 95.00th=[ 3004],
00:28:41.941 | 99.00th=[ 3071], 99.50th=[ 3071], 99.90th=[ 3104], 99.95th=[ 3104],
00:28:41.941 | 99.99th=[ 3104]
00:28:41.941 bw ( KiB/s): min=16384, max=71680, per=0.90%, avg=45292.59, stdev=12687.52, samples=17
00:28:41.941 iops : min= 16, max= 70, avg=44.18, stdev=12.43, samples=17
00:28:41.941 lat (msec) : 100=0.20%, 250=1.39%, 500=3.38%, 750=3.98%, 1000=2.19%
00:28:41.941 lat (msec) : 2000=9.54%, >=2000=79.32%
00:28:41.941 cpu : usr=0.02%, sys=1.18%, ctx=1619, majf=0, minf=32769
00:28:41.941 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5%
00:28:41.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.941 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.941 issued rwts: total=503,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.941 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.941 job2: (groupid=0, jobs=1): err= 0: pid=10803: Mon Jun 10 10:56:09 2024
00:28:41.941 read: IOPS=120, BW=121MiB/s (127MB/s)(1213MiB/10036msec)
00:28:41.941 slat (usec): min=34, max=63151, avg=8248.86, stdev=9215.54
00:28:41.941 clat (msec): min=25, max=1730, avg=942.94, stdev=274.44
00:28:41.941 lat (msec): min=41, max=1746, avg=951.19, stdev=275.89
00:28:41.941 clat percentiles (msec):
00:28:41.941 | 1.00th=[ 174], 5.00th=[ 659], 10.00th=[ 701], 20.00th=[ 735],
00:28:41.941 | 30.00th=[ 751], 40.00th=[ 793], 50.00th=[ 902], 60.00th=[ 1011],
00:28:41.941 | 70.00th=[ 1116], 80.00th=[ 1217], 90.00th=[ 1284], 95.00th=[ 1385],
00:28:41.941 | 99.00th=[ 1586], 99.50th=[ 1670], 99.90th=[ 1720], 99.95th=[ 1737],
00:28:41.941 | 99.99th=[ 1737]
00:28:41.941 bw ( KiB/s): min=69632, max=198656, per=2.65%, avg=132864.00, stdev=39371.88, samples=16
00:28:41.941 iops : min= 68, max= 194, avg=129.75, stdev=38.45, samples=16
00:28:41.941 lat (msec) : 50=0.16%, 100=0.41%, 250=0.99%, 500=2.23%, 750=26.13%
00:28:41.941 lat (msec) : 1000=29.18%, 2000=40.89%
00:28:41.941 cpu : usr=0.02%, sys=1.68%, ctx=1728, majf=0, minf=32769
00:28:41.941 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8%
00:28:41.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.941 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.941 issued rwts: total=1213,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.941 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.941 job2: (groupid=0, jobs=1): err= 0: pid=10804: Mon Jun 10 10:56:09 2024
00:28:41.941 read: IOPS=94, BW=94.7MiB/s (99.3MB/s)(956MiB/10100msec)
00:28:41.941 slat (usec): min=33, max=111099, avg=10478.13, stdev=20683.48
00:28:41.941 clat (msec): min=79, max=2556, avg=1215.50, stdev=569.09
00:28:41.941 lat (msec): min=115, max=2573, avg=1225.98, stdev=571.96
00:28:41.941 clat percentiles (msec):
00:28:41.941 | 1.00th=[ 203], 5.00th=[ 726], 10.00th=[ 760], 20.00th=[ 827],
00:28:41.941 | 30.00th=[ 835], 40.00th=[ 860], 50.00th=[ 911], 60.00th=[ 1003],
00:28:41.941 | 70.00th=[ 1469], 80.00th=[ 1871], 90.00th=[ 2165], 95.00th=[ 2265],
00:28:41.941 | 99.00th=[ 2500], 99.50th=[ 2534], 99.90th=[ 2567], 99.95th=[ 2567],
00:28:41.941 | 99.99th=[ 2567]
00:28:41.941 bw ( KiB/s): min=22528, max=184320, per=1.99%, avg=99749.65, stdev=52742.29, samples=17
00:28:41.941 iops : min= 22, max= 180, avg=97.41, stdev=51.51, samples=17
00:28:41.941 lat (msec) : 100=0.10%, 250=1.67%, 500=1.15%, 750=5.65%, 1000=51.26%
00:28:41.941 lat (msec) : 2000=23.33%, >=2000=16.84%
00:28:41.941 cpu : usr=0.06%, sys=1.28%, ctx=1531, majf=0, minf=32769
00:28:41.941 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4%
00:28:41.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.941 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.941 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.941 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.941 job2: (groupid=0, jobs=1): err= 0: pid=10805: Mon Jun 10 10:56:09 2024
00:28:41.941 read: IOPS=62, BW=62.7MiB/s (65.7MB/s)(631MiB/10066msec)
00:28:41.941 slat (usec): min=32, max=128932, avg=15867.36, stdev=20420.98
00:28:41.941 clat (msec): min=50, max=2889, avg=1923.21, stdev=678.26
00:28:41.941 lat (msec): min=68, max=2928, avg=1939.07, stdev=680.53
00:28:41.941 clat percentiles (msec):
00:28:41.941 | 1.00th=[ 83], 5.00th=[ 523], 10.00th=[ 785], 20.00th=[ 1318],
00:28:41.941 | 30.00th=[ 1787], 40.00th=[ 1905], 50.00th=[ 2106], 60.00th=[ 2232],
00:28:41.941 | 70.00th=[ 2333], 80.00th=[ 2467], 90.00th=[ 2702], 95.00th=[ 2769],
00:28:41.941 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2903], 99.95th=[ 2903],
00:28:41.941 | 99.99th=[ 2903]
00:28:41.941 bw ( KiB/s): min=28672, max=94208, per=1.13%, avg=56734.65, stdev=16474.81, samples=17
00:28:41.942 iops : min= 28, max= 92, avg=55.35, stdev=16.08, samples=17
00:28:41.942 lat (msec) : 100=1.27%, 250=1.43%, 500=1.74%, 750=3.33%, 1000=3.80%
00:28:41.942 lat (msec) : 2000=32.96%, >=2000=55.47%
00:28:41.942 cpu : usr=0.03%, sys=1.35%, ctx=1586, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.942 issued rwts: total=631,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.942 job2: (groupid=0, jobs=1): err= 0: pid=10806: Mon Jun 10 10:56:09 2024
00:28:41.942 read: IOPS=52, BW=52.5MiB/s (55.0MB/s)(530MiB/10102msec)
00:28:41.942 slat (usec): min=99, max=130510, avg=18877.54, stdev=20292.36
00:28:41.942 clat (msec): min=94, max=3396, avg=2261.90, stdev=696.56
00:28:41.942 lat (msec): min=154, max=3424, avg=2280.78, stdev=696.64
00:28:41.942 clat percentiles (msec):
00:28:41.942 | 1.00th=[ 288], 5.00th=[ 793], 10.00th=[ 1318], 20.00th=[ 1703],
00:28:41.942 | 30.00th=[ 2056], 40.00th=[ 2232], 50.00th=[ 2433], 60.00th=[ 2534],
00:28:41.942 | 70.00th=[ 2635], 80.00th=[ 2769], 90.00th=[ 3071], 95.00th=[ 3272],
00:28:41.942 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3406], 99.95th=[ 3406],
00:28:41.942 | 99.99th=[ 3406]
00:28:41.942 bw ( KiB/s): min=28672, max=92160, per=0.97%, avg=48549.65, stdev=16539.50, samples=17
00:28:41.942 iops : min= 28, max= 90, avg=47.41, stdev=16.15, samples=17
00:28:41.942 lat (msec) : 100=0.19%, 250=0.75%, 500=1.32%, 750=2.08%, 1000=3.40%
00:28:41.942 lat (msec) : 2000=21.51%, >=2000=70.75%
00:28:41.942 cpu : usr=0.00%, sys=1.36%, ctx=1667, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.942 issued rwts: total=530,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.942 job2: (groupid=0, jobs=1): err= 0: pid=10807: Mon Jun 10 10:56:09 2024
00:28:41.942 read: IOPS=58, BW=58.1MiB/s (61.0MB/s)(586MiB/10081msec)
00:28:41.942 slat (usec): min=35, max=99624, avg=17075.24, stdev=18915.81
00:28:41.942 clat (msec): min=72, max=3309, avg=1949.05, stdev=802.15
00:28:41.942 lat (msec): min=82, max=3311, avg=1966.12, stdev=805.14
00:28:41.942 clat percentiles (msec):
00:28:41.942 | 1.00th=[ 161], 5.00th=[ 451], 10.00th=[ 844], 20.00th=[ 1385],
00:28:41.942 | 30.00th=[ 1536], 40.00th=[ 1687], 50.00th=[ 1821], 60.00th=[ 2198],
00:28:41.942 | 70.00th=[ 2400], 80.00th=[ 2836], 90.00th=[ 3071], 95.00th=[ 3171],
00:28:41.942 | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 3306], 99.95th=[ 3306],
00:28:41.942 | 99.99th=[ 3306]
00:28:41.942 bw ( KiB/s): min=32768, max=147456, per=1.17%, avg=58684.81, stdev=28846.62, samples=16
00:28:41.942 iops : min= 32, max= 144, avg=57.25, stdev=28.13, samples=16
00:28:41.942 lat (msec) : 100=0.51%, 250=1.88%, 500=2.90%, 750=3.07%, 1000=3.07%
00:28:41.942 lat (msec) : 2000=44.88%, >=2000=43.69%
00:28:41.942 cpu : usr=0.04%, sys=0.93%, ctx=1660, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.942 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.942 job2: (groupid=0, jobs=1): err= 0: pid=10808: Mon Jun 10 10:56:09 2024
00:28:41.942 read: IOPS=58, BW=58.6MiB/s (61.5MB/s)(593MiB/10111msec)
00:28:41.942 slat (usec): min=31, max=108229, avg=16911.63, stdev=20147.64
00:28:41.942 clat (msec): min=80, max=2451, avg=1967.68, stdev=484.90
00:28:41.942 lat (msec): min=129, max=2483, avg=1984.60, stdev=483.56
00:28:41.942 clat percentiles (msec):
00:28:41.942 | 1.00th=[ 188], 5.00th=[ 743], 10.00th=[ 1217], 20.00th=[ 1888],
00:28:41.942 | 30.00th=[ 2005], 40.00th=[ 2089], 50.00th=[ 2140], 60.00th=[ 2165],
00:28:41.942 | 70.00th=[ 2198], 80.00th=[ 2232], 90.00th=[ 2333], 95.00th=[ 2366],
00:28:41.942 | 99.00th=[ 2433], 99.50th=[ 2467], 99.90th=[ 2467], 99.95th=[ 2467],
00:28:41.942 | 99.99th=[ 2467]
00:28:41.942 bw ( KiB/s): min= 4096, max=98304, per=1.12%, avg=56018.82, stdev=20784.19, samples=17
00:28:41.942 iops : min= 4, max= 96, avg=54.71, stdev=20.30, samples=17
00:28:41.942 lat (msec) : 100=0.17%, 250=1.01%, 500=2.19%, 750=1.69%, 1000=3.54%
00:28:41.942 lat (msec) : 2000=20.24%, >=2000=71.16%
00:28:41.942 cpu : usr=0.00%, sys=1.05%, ctx=1538, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.942 issued rwts: total=593,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.942 job3: (groupid=0, jobs=1): err= 0: pid=10820: Mon Jun 10 10:56:09 2024
00:28:41.942 read: IOPS=54, BW=54.1MiB/s (56.8MB/s)(546MiB/10084msec)
00:28:41.942 slat (usec): min=32, max=101513, avg=18314.17, stdev=18695.53
00:28:41.942 clat (msec): min=82, max=3447, avg=2112.69, stdev=927.59
00:28:41.942 lat (msec): min=87, max=3462, avg=2131.00, stdev=931.97
00:28:41.942 clat percentiles (msec):
00:28:41.942 | 1.00th=[ 134], 5.00th=[ 439], 10.00th=[ 693], 20.00th=[ 1385],
00:28:41.942 | 30.00th=[ 1586], 40.00th=[ 1804], 50.00th=[ 2106], 60.00th=[ 2534],
00:28:41.942 | 70.00th=[ 2937], 80.00th=[ 3071], 90.00th=[ 3272], 95.00th=[ 3339],
00:28:41.942 | 99.00th=[ 3373], 99.50th=[ 3440], 99.90th=[ 3440], 99.95th=[ 3440],
00:28:41.942 | 99.99th=[ 3440]
00:28:41.942 bw ( KiB/s): min= 8192, max=94208, per=1.07%, avg=53636.81, stdev=23409.98, samples=16
00:28:41.942 iops : min= 8, max= 92, avg=52.37, stdev=22.86, samples=16
00:28:41.942 lat (msec) : 100=0.37%, 250=1.65%, 500=4.03%, 750=4.40%, 1000=4.21%
00:28:41.942 lat (msec) : 2000=31.87%, >=2000=53.48%
00:28:41.942 cpu : usr=0.01%, sys=0.97%, ctx=1684, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.942 issued rwts: total=546,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.942 job3: (groupid=0, jobs=1): err= 0: pid=10821: Mon Jun 10 10:56:09 2024
00:28:41.942 read: IOPS=55, BW=55.5MiB/s (58.2MB/s)(561MiB/10102msec)
00:28:41.942 slat (usec): min=43, max=160112, avg=17826.53, stdev=21423.32
00:28:41.942 clat (msec): min=99, max=3659, avg=2129.63, stdev=915.63
00:28:41.942 lat (msec): min=101, max=3671, avg=2147.45, stdev=919.50
00:28:41.942 clat percentiles (msec):
00:28:41.942 | 1.00th=[ 106], 5.00th=[ 305], 10.00th=[ 567], 20.00th=[ 1653],
00:28:41.942 | 30.00th=[ 1787], 40.00th=[ 1854], 50.00th=[ 2056], 60.00th=[ 2534],
00:28:41.942 | 70.00th=[ 2702], 80.00th=[ 2970], 90.00th=[ 3339], 95.00th=[ 3406],
00:28:41.942 | 99.00th=[ 3641], 99.50th=[ 3641], 99.90th=[ 3675], 99.95th=[ 3675],
00:28:41.942 | 99.99th=[ 3675]
00:28:41.942 bw ( KiB/s): min=10240, max=116736, per=1.04%, avg=52284.24, stdev=25478.03, samples=17
00:28:41.942 iops : min= 10, max= 114, avg=51.06, stdev=24.88, samples=17
00:28:41.942 lat (msec) : 100=0.18%, 250=4.10%, 500=5.17%, 750=1.96%, 1000=1.96%
00:28:41.942 lat (msec) : 2000=34.22%, >=2000=52.41%
00:28:41.942 cpu : usr=0.00%, sys=1.25%, ctx=1777, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.8%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.942 issued rwts: total=561,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.942 job3: (groupid=0, jobs=1): err= 0: pid=10822: Mon Jun 10 10:56:09 2024
00:28:41.942 read: IOPS=158, BW=158MiB/s (166MB/s)(1599MiB/10092msec)
00:28:41.942 slat (usec): min=30, max=103012, avg=6251.70, stdev=17203.56
00:28:41.942 clat (msec): min=87, max=993, avg=770.98, stdev=123.69
00:28:41.942 lat (msec): min=91, max=999, avg=777.24, stdev=124.38
00:28:41.942 clat percentiles (msec):
00:28:41.942 | 1.00th=[ 188], 5.00th=[ 592], 10.00th=[ 684], 20.00th=[ 735],
00:28:41.942 | 30.00th=[ 760], 40.00th=[ 776], 50.00th=[ 785], 60.00th=[ 793],
00:28:41.942 | 70.00th=[ 818], 80.00th=[ 844], 90.00th=[ 869], 95.00th=[ 911],
00:28:41.942 | 99.00th=[ 978], 99.50th=[ 986], 99.90th=[ 986], 99.95th=[ 995],
00:28:41.942 | 99.99th=[ 995]
00:28:41.942 bw ( KiB/s): min=38912, max=192512, per=3.16%, avg=158666.11, stdev=32434.70, samples=19
00:28:41.942 iops : min= 38, max= 188, avg=154.95, stdev=31.67, samples=19
00:28:41.942 lat (msec) : 100=0.44%, 250=1.19%, 500=2.50%, 750=23.33%, 1000=72.55%
00:28:41.942 cpu : usr=0.07%, sys=2.13%, ctx=1481, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.942 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.942 job3: (groupid=0, jobs=1): err= 0: pid=10823: Mon Jun 10 10:56:09 2024
00:28:41.942 read: IOPS=52, BW=52.3MiB/s (54.8MB/s)(527MiB/10079msec)
00:28:41.942 slat (usec): min=29, max=92356, avg=18978.81, stdev=17416.78
00:28:41.942 clat (msec): min=74, max=3272, avg=2158.21, stdev=748.21
00:28:41.942 lat (msec): min=89, max=3301, avg=2177.19, stdev=750.52
00:28:41.942 clat percentiles (msec):
00:28:41.942 | 1.00th=[ 157], 5.00th=[ 514], 10.00th=[ 768], 20.00th=[ 1603],
00:28:41.942 | 30.00th=[ 1989], 40.00th=[ 2299], 50.00th=[ 2366], 60.00th=[ 2467],
00:28:41.942 | 70.00th=[ 2601], 80.00th=[ 2702], 90.00th=[ 2937], 95.00th=[ 3071],
00:28:41.942 | 99.00th=[ 3239], 99.50th=[ 3239], 99.90th=[ 3272], 99.95th=[ 3272],
00:28:41.942 | 99.99th=[ 3272]
00:28:41.942 bw ( KiB/s): min= 2048, max=77824, per=1.02%, avg=51200.00, stdev=20751.27, samples=16
00:28:41.942 iops : min= 2, max= 76, avg=50.00, stdev=20.26, samples=16
00:28:41.942 lat (msec) : 100=0.38%, 250=1.33%, 500=2.09%, 750=6.07%, 1000=1.71%
00:28:41.942 lat (msec) : 2000=18.60%, >=2000=69.83%
00:28:41.942 cpu : usr=0.00%, sys=1.03%, ctx=1826, majf=0, minf=32769
00:28:41.942 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0%
00:28:41.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.942 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.942 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10824: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=49, BW=49.3MiB/s (51.7MB/s)(497MiB/10083msec)
00:28:41.943 slat (usec): min=357, max=128550, avg=20125.44, stdev=18386.27
00:28:41.943 clat (msec): min=78, max=3232, avg=2229.55, stdev=831.07
00:28:41.943 lat (msec): min=86, max=3258, avg=2249.67, stdev=833.63
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 97], 5.00th=[ 342], 10.00th=[ 592], 20.00th=[ 1653],
00:28:41.943 | 30.00th=[ 2232], 40.00th=[ 2467], 50.00th=[ 2567], 60.00th=[ 2601],
00:28:41.943 | 70.00th=[ 2702], 80.00th=[ 2769], 90.00th=[ 2970], 95.00th=[ 3171],
00:28:41.943 | 99.00th=[ 3205], 99.50th=[ 3239], 99.90th=[ 3239], 99.95th=[ 3239],
00:28:41.943 | 99.99th=[ 3239]
00:28:41.943 bw ( KiB/s): min=32768, max=98966, per=1.01%, avg=50424.93, stdev=16860.06, samples=15
00:28:41.943 iops : min= 32, max= 96, avg=49.20, stdev=16.33, samples=15
00:28:41.943 lat (msec) : 100=1.01%, 250=1.81%, 500=5.43%, 750=3.42%, 1000=2.41%
00:28:41.943 lat (msec) : 2000=9.66%, >=2000=76.26%
00:28:41.943 cpu : usr=0.03%, sys=0.86%, ctx=1726, majf=0, minf=32769
00:28:41.943 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.3%
00:28:41.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.943 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.943 issued rwts: total=497,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10825: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=43, BW=43.1MiB/s (45.2MB/s)(435MiB/10083msec)
00:28:41.943 slat (usec): min=34, max=202322, avg=22989.23, stdev=25564.10
00:28:41.943 clat (msec): min=80, max=4814, avg=2483.56, stdev=1282.12
00:28:41.943 lat (msec): min=83, max=4841, avg=2506.55, stdev=1287.83
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 163], 5.00th=[ 426], 10.00th=[ 751], 20.00th=[ 1485],
00:28:41.943 | 30.00th=[ 1888], 40.00th=[ 2039], 50.00th=[ 2165], 60.00th=[ 2333],
00:28:41.943 | 70.00th=[ 3507], 80.00th=[ 4010], 90.00th=[ 4329], 95.00th=[ 4597],
00:28:41.943 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799],
00:28:41.943 | 99.99th=[ 4799]
00:28:41.943 bw ( KiB/s): min= 8192, max=81920, per=0.90%, avg=45016.50, stdev=24573.29, samples=14
00:28:41.943 iops : min= 8, max= 80, avg=43.93, stdev=23.96, samples=14
00:28:41.943 lat (msec) : 100=0.92%, 250=1.61%, 500=3.45%, 750=4.14%, 1000=3.22%
00:28:41.943 lat (msec) : 2000=24.37%, >=2000=62.30%
00:28:41.943 cpu : usr=0.02%, sys=0.86%, ctx=1733, majf=0, minf=32769
00:28:41.943 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5%
00:28:41.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.943 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.943 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10826: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=47, BW=47.7MiB/s (50.0MB/s)(480MiB/10070msec)
00:28:41.943 slat (usec): min=33, max=161491, avg=20830.41, stdev=21527.72
00:28:41.943 clat (msec): min=69, max=3378, avg=2334.63, stdev=832.04
00:28:41.943 lat (msec): min=70, max=3382, avg=2355.46, stdev=833.86
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 79], 5.00th=[ 351], 10.00th=[ 911], 20.00th=[ 1720],
00:28:41.943 | 30.00th=[ 2265], 40.00th=[ 2333], 50.00th=[ 2500], 60.00th=[ 2735],
00:28:41.943 | 70.00th=[ 2937], 80.00th=[ 3004], 90.00th=[ 3104], 95.00th=[ 3205],
00:28:41.943 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373],
00:28:41.943 | 99.99th=[ 3373]
00:28:41.943 bw ( KiB/s): min=20480, max=59392, per=0.90%, avg=45154.50, stdev=10988.04, samples=16
00:28:41.943 iops : min= 20, max= 58, avg=44.06, stdev=10.69, samples=16
00:28:41.943 lat (msec) : 100=2.08%, 250=1.67%, 500=2.29%, 750=1.04%, 1000=3.75%
00:28:41.943 lat (msec) : 2000=11.46%, >=2000=77.71%
00:28:41.943 cpu : usr=0.00%, sys=0.99%, ctx=1643, majf=0, minf=32769
00:28:41.943 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.7%, >=64=86.9%
00:28:41.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.943 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.943 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10828: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=53, BW=53.9MiB/s (56.5MB/s)(544MiB/10092msec)
00:28:41.943 slat (usec): min=43, max=106991, avg=18391.78, stdev=19641.42
00:28:41.943 clat (msec): min=84, max=2921, avg=2234.21, stdev=660.80
00:28:41.943 lat (msec): min=105, max=2942, avg=2252.60, stdev=660.97
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 184], 5.00th=[ 634], 10.00th=[ 1083], 20.00th=[ 2022],
00:28:41.943 | 30.00th=[ 2123], 40.00th=[ 2333], 50.00th=[ 2433], 60.00th=[ 2601],
00:28:41.943 | 70.00th=[ 2668], 80.00th=[ 2735], 90.00th=[ 2802], 95.00th=[ 2836],
00:28:41.943 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937],
00:28:41.943 | 99.99th=[ 2937]
00:28:41.943 bw ( KiB/s): min= 6144, max=79872, per=1.00%, avg=50236.24, stdev=16796.68, samples=17
00:28:41.943 iops : min= 6, max= 78, avg=49.06, stdev=16.40, samples=17
00:28:41.943 lat (msec) : 100=0.18%, 250=1.65%, 500=2.02%, 750=2.39%, 1000=2.94%
00:28:41.943 lat (msec) : 2000=9.93%, >=2000=80.88%
00:28:41.943 cpu : usr=0.01%, sys=1.42%, ctx=1806, majf=0, minf=32267
00:28:41.943 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4%
00:28:41.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.943 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.943 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10829: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=57, BW=57.5MiB/s (60.3MB/s)(582MiB/10121msec)
00:28:41.943 slat (usec): min=33, max=106180, avg=17263.43, stdev=17147.24
00:28:41.943 clat (msec): min=70, max=2710, avg=2047.19, stdev=550.02
00:28:41.943 lat (msec): min=147, max=2719, avg=2064.46, stdev=549.97
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 205], 5.00th=[ 793], 10.00th=[ 1200], 20.00th=[ 1821],
00:28:41.943 | 30.00th=[ 1921], 40.00th=[ 2056], 50.00th=[ 2165], 60.00th=[ 2333],
00:28:41.943 | 70.00th=[ 2400], 80.00th=[ 2467], 90.00th=[ 2534], 95.00th=[ 2567],
00:28:41.943 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2702], 99.95th=[ 2702],
00:28:41.943 | 99.99th=[ 2702]
00:28:41.943 bw ( KiB/s): min=12288, max=79872, per=1.09%, avg=54693.65, stdev=16634.32, samples=17
00:28:41.943 iops : min= 12, max= 78, avg=53.41, stdev=16.24, samples=17
00:28:41.943 lat (msec) : 100=0.17%, 250=1.20%, 500=2.06%, 750=1.55%, 1000=2.41%
00:28:41.943 lat (msec) : 2000=25.95%, >=2000=66.67%
00:28:41.943 cpu : usr=0.01%, sys=1.25%, ctx=1757, majf=0, minf=32769
00:28:41.943 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2%
00:28:41.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.943 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.943 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10830: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=47, BW=47.1MiB/s (49.4MB/s)(475MiB/10089msec)
00:28:41.943 slat (usec): min=36, max=124359, avg=21084.24, stdev=17220.01
00:28:41.943 clat (msec): min=71, max=3463, avg=2251.96, stdev=770.66
00:28:41.943 lat (msec): min=95, max=3474, avg=2273.04, stdev=772.76
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 131], 5.00th=[ 550], 10.00th=[ 927], 20.00th=[ 1854],
00:28:41.943 | 30.00th=[ 2165], 40.00th=[ 2232], 50.00th=[ 2299], 60.00th=[ 2500],
00:28:41.943 | 70.00th=[ 2769], 80.00th=[ 2903], 90.00th=[ 3004], 95.00th=[ 3272],
00:28:41.943 | 99.00th=[ 3406], 99.50th=[ 3440], 99.90th=[ 3473], 99.95th=[ 3473],
00:28:41.943 | 99.99th=[ 3473]
00:28:41.943 bw ( KiB/s): min=16384, max=71680, per=1.01%, avg=50767.29, stdev=12752.38, samples=14
00:28:41.943 iops : min= 16, max= 70, avg=49.57, stdev=12.46, samples=14
00:28:41.943 lat (msec) : 100=0.42%, 250=1.89%, 500=2.53%, 750=2.95%, 1000=2.74%
00:28:41.943 lat (msec) : 2000=10.95%, >=2000=78.53%
00:28:41.943 cpu : usr=0.00%, sys=0.89%, ctx=1711, majf=0, minf=32769
00:28:41.943 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.7%
00:28:41.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.943 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.943 issued rwts: total=475,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10831: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=65, BW=65.2MiB/s (68.4MB/s)(657MiB/10072msec)
00:28:41.943 slat (usec): min=34, max=126884, avg=15221.05, stdev=21553.57
00:28:41.943 clat (msec): min=69, max=3132, avg=1830.98, stdev=733.16
00:28:41.943 lat (msec): min=81, max=3145, avg=1846.20, stdev=735.43
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 138], 5.00th=[ 693], 10.00th=[ 961], 20.00th=[ 1250],
00:28:41.943 | 30.00th=[ 1368], 40.00th=[ 1636], 50.00th=[ 1737], 60.00th=[ 1838],
00:28:41.943 | 70.00th=[ 2165], 80.00th=[ 2735], 90.00th=[ 2903], 95.00th=[ 3004],
00:28:41.943 | 99.00th=[ 3104], 99.50th=[ 3104], 99.90th=[ 3138], 99.95th=[ 3138],
00:28:41.943 | 99.99th=[ 3138]
00:28:41.943 bw ( KiB/s): min=14336, max=135168, per=1.20%, avg=60277.83, stdev=31569.76, samples=18
00:28:41.943 iops : min= 14, max= 132, avg=58.83, stdev=30.84, samples=18
00:28:41.943 lat (msec) : 100=0.30%, 250=1.98%, 500=1.67%, 750=1.67%, 1000=4.72%
00:28:41.943 lat (msec) : 2000=54.64%, >=2000=35.01%
00:28:41.943 cpu : usr=0.08%, sys=0.94%, ctx=1632, majf=0, minf=32769
00:28:41.943 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4%
00:28:41.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.943 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.943 issued rwts: total=657,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.943 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.943 job3: (groupid=0, jobs=1): err= 0: pid=10832: Mon Jun 10 10:56:09 2024
00:28:41.943 read: IOPS=37, BW=37.2MiB/s (39.0MB/s)(375MiB/10081msec)
00:28:41.943 slat (usec): min=30, max=159912, avg=26680.17, stdev=28859.13
00:28:41.943 clat (msec): min=73, max=5633, avg=3050.59, stdev=1948.46
00:28:41.943 lat (msec): min=144, max=5673, avg=3077.27, stdev=1957.46
00:28:41.943 clat percentiles (msec):
00:28:41.943 | 1.00th=[ 146], 5.00th=[ 236], 10.00th=[ 330], 20.00th=[ 506],
00:28:41.943 | 30.00th=[ 1720], 40.00th=[ 2265], 50.00th=[ 3473], 60.00th=[ 4144],
00:28:41.943 | 70.00th=[ 4732], 80.00th=[ 5134], 90.00th=[ 5403], 95.00th=[ 5537],
00:28:41.943 | 99.00th=[ 5604], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604],
00:28:41.943 | 99.99th=[ 5604]
00:28:41.943 bw ( KiB/s): min= 8192, max=159744, per=0.67%, avg=33860.27, stdev=36569.17, samples=15
00:28:41.944 iops : min= 8, max= 156, avg=33.07, stdev=35.71, samples=15
00:28:41.944 lat (msec) : 100=0.27%, 250=7.47%, 500=8.53%, 750=7.73%, 1000=1.33%
00:28:41.944 lat (msec) : 2000=11.73%, >=2000=62.93%
00:28:41.944 cpu : usr=0.00%, sys=1.15%, ctx=1805, majf=0, minf=32769
00:28:41.944 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2%
00:28:41.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.944 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:28:41.944 issued rwts: total=375,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.944 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.944 job3: (groupid=0, jobs=1): err= 0: pid=10833: Mon Jun 10 10:56:09 2024
00:28:41.944 read: IOPS=98, BW=98.6MiB/s (103MB/s)(990MiB/10044msec)
00:28:41.944 slat (usec): min=38, max=122847, avg=10099.47, stdev=18510.37
00:28:41.944 clat (msec): min=41, max=2503, avg=1129.45, stdev=643.37
00:28:41.944 lat (msec): min=44, max=2530, avg=1139.55, stdev=647.84
00:28:41.944 clat percentiles (msec):
00:28:41.944 | 1.00th=[ 64], 5.00th=[ 284], 10.00th=[ 550], 20.00th=[ 701],
00:28:41.944 | 30.00th=[ 709], 40.00th=[ 718], 50.00th=[ 776], 60.00th=[ 1083],
00:28:41.944 | 70.00th=[ 1418], 80.00th=[ 1838], 90.00th=[ 2198], 95.00th=[ 2366],
00:28:41.944 | 99.00th=[ 2467], 99.50th=[ 2467], 99.90th=[ 2500], 99.95th=[ 2500],
00:28:41.944 | 99.99th=[ 2500]
00:28:41.944 bw ( KiB/s): min=20480, max=198656, per=1.97%, avg=98944.00, stdev=64272.49, samples=16
00:28:41.944 iops : min= 20, max= 194, avg=96.63, stdev=62.77, samples=16
00:28:41.944 lat (msec) : 50=0.20%, 100=0.91%, 250=3.03%, 500=4.75%, 750=37.27%
00:28:41.944 lat (msec) : 1000=11.31%, 2000=25.76%, >=2000=16.77%
00:28:41.944 cpu : usr=0.03%, sys=1.22%, ctx=1618, majf=0, minf=32769
00:28:41.944 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.6%
00:28:41.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.944 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.944 issued rwts: total=990,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.944 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.944 job4: (groupid=0, jobs=1): err= 0: pid=10837: Mon Jun 10 10:56:09 2024
00:28:41.944 read: IOPS=89, BW=89.2MiB/s (93.6MB/s)(896MiB/10043msec)
00:28:41.944 slat (usec): min=34, max=87464, avg=11165.50, stdev=16203.31
00:28:41.944 clat (msec): min=35, max=2445, avg=1288.89, stdev=544.98
00:28:41.944 lat (msec): min=44, max=2458, avg=1300.06, stdev=547.16
00:28:41.944 clat percentiles (msec):
00:28:41.944 | 1.00th=[ 113], 5.00th=[ 768], 10.00th=[ 776], 20.00th=[ 802],
00:28:41.944 | 30.00th=[ 877], 40.00th=[ 1020], 50.00th=[ 1116], 60.00th=[ 1284],
00:28:41.944 | 70.00th=[ 1469], 80.00th=[ 1972], 90.00th=[ 2165], 95.00th=[ 2265],
00:28:41.944 | 99.00th=[ 2400], 99.50th=[ 2433], 99.90th=[ 2433], 99.95th=[ 2433],
00:28:41.944 | 99.99th=[ 2433]
00:28:41.944 bw ( KiB/s): min=49152, max=163840, per=1.90%, avg=95232.00, stdev=41502.54, samples=16
00:28:41.944 iops : min= 48, max= 160, avg=93.00, stdev=40.53, samples=16
00:28:41.944 lat (msec) : 50=0.33%, 100=0.56%, 250=0.56%, 500=1.23%, 750=1.67%
00:28:41.944 lat (msec) : 1000=30.47%, 2000=45.54%, >=2000=19.64%
00:28:41.944 cpu : usr=0.02%, sys=1.62%, ctx=1638, majf=0, minf=32769
00:28:41.944 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0%
00:28:41.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.944 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.944 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.944 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.944 job4: (groupid=0, jobs=1): err= 0: pid=10838: Mon Jun 10 10:56:09 2024
00:28:41.944 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(645MiB/10097msec)
00:28:41.944 slat (usec): min=37, max=120520, avg=15522.59, stdev=16623.23
00:28:41.944 clat (msec): min=82, max=2906, avg=1763.30, stdev=778.26
00:28:41.944 lat (msec): min=125, max=2917, avg=1778.83, stdev=782.48
00:28:41.944 clat percentiles (msec):
00:28:41.944 | 1.00th=[ 215], 5.00th=[ 313], 10.00th=[ 567], 20.00th=[ 1133],
00:28:41.944 | 30.00th=[ 1334], 40.00th=[ 1418], 50.00th=[ 1670], 60.00th=[ 2299],
00:28:41.944 | 70.00th=[ 2400], 80.00th=[ 2500], 90.00th=[ 2668], 95.00th=[ 2769],
00:28:41.944 | 99.00th=[ 2869], 99.50th=[ 2903], 99.90th=[ 2903], 99.95th=[ 2903],
00:28:41.944 | 99.99th=[ 2903]
00:28:41.944 bw ( KiB/s): min=32768, max=161792, per=1.41%, avg=70579.00, stdev=33882.99, samples=15
00:28:41.944 iops : min= 32, max= 158, avg=68.87, stdev=33.10, samples=15
00:28:41.944 lat (msec) : 100=0.16%, 250=2.64%, 500=7.13%, 750=3.26%, 1000=2.48%
00:28:41.944 lat (msec) : 2000=36.43%, >=2000=47.91%
00:28:41.944 cpu : usr=0.00%, sys=1.19%, ctx=1674, majf=0, minf=32769
00:28:41.944 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:28:41.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.944 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.944 issued rwts: total=645,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.944 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.944 job4: (groupid=0, jobs=1): err= 0: pid=10839: Mon Jun 10 10:56:09 2024
00:28:41.944 read: IOPS=63, BW=63.1MiB/s (66.1MB/s)(634MiB/10050msec)
00:28:41.944 slat (usec): min=37, max=89249, avg=15771.93, stdev=16147.78
00:28:41.944 clat (msec): min=47, max=2849, avg=1819.66, stdev=647.51
00:28:41.944 lat (msec): min=56, max=2878, avg=1835.43, stdev=648.98
00:28:41.944 clat percentiles (msec):
00:28:41.944 | 1.00th=[ 122], 5.00th=[ 531], 10.00th=[ 927], 20.00th=[ 1217],
00:28:41.944 | 30.00th=[ 1469], 40.00th=[ 1804], 50.00th=[ 1972], 60.00th=[ 2123],
00:28:41.944 | 70.00th=[ 2333], 80.00th=[ 2366], 90.00th=[ 2567], 95.00th=[ 2601],
00:28:41.944 | 99.00th=[ 2769], 99.50th=[ 2836], 99.90th=[ 2836], 99.95th=[ 2836],
00:28:41.944 | 99.99th=[ 2836]
00:28:41.944 bw ( KiB/s): min=20480, max=163840, per=1.29%, avg=64853.33, stdev=31601.33, samples=15
00:28:41.944 iops : min= 20, max= 160, avg=63.33, stdev=30.86, samples=15
00:28:41.944 lat (msec) : 50=0.16%, 100=0.32%, 250=2.68%, 500=1.74%, 750=1.26%
00:28:41.944 lat (msec) : 1000=5.05%, 2000=41.01%, >=2000=47.79%
00:28:41.944 cpu : usr=0.00%, sys=1.04%, ctx=1767, majf=0, minf=32769
00:28:41.944 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1%
00:28:41.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.944 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.944 issued rwts: total=634,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.944 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.944 job4: (groupid=0, jobs=1): err= 0: pid=10840: Mon Jun 10 10:56:09 2024
00:28:41.944 read: IOPS=63, BW=63.0MiB/s (66.1MB/s)(637MiB/10110msec)
00:28:41.944 slat (usec): min=32, max=85701, avg=15719.78, stdev=16629.74
00:28:41.944 clat (msec): min=94, max=2990, avg=1780.71, stdev=645.18
00:28:41.944 lat (msec): min=131, max=3027, avg=1796.43, stdev=645.59
00:28:41.944 clat percentiles (msec):
00:28:41.944 | 1.00th=[ 239], 5.00th=[ 735], 10.00th=[ 1083], 20.00th=[ 1200],
00:28:41.944 | 30.00th=[ 1351], 40.00th=[ 1536], 50.00th=[ 1754], 60.00th=[ 1921],
00:28:41.944 | 70.00th=[ 2198], 80.00th=[ 2467], 90.00th=[ 2668], 95.00th=[ 2769],
00:28:41.944 | 99.00th=[ 2937], 99.50th=[ 2970], 99.90th=[ 3004], 99.95th=[ 3004],
00:28:41.944 | 99.99th=[ 3004]
00:28:41.944 bw ( KiB/s): min= 6144, max=155648, per=1.30%, avg=65152.00, stdev=38096.61, samples=16
00:28:41.944 iops : min= 6, max= 152, avg=63.63, stdev=37.20, samples=16
00:28:41.944 lat (msec) : 100=0.16%, 250=0.94%, 500=1.88%, 750=2.20%, 1000=2.20%
00:28:41.944 lat (msec) : 2000=56.04%, >=2000=36.58%
00:28:41.944 cpu : usr=0.01%, sys=1.16%, ctx=1760, majf=0, minf=32769
00:28:41.944 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1%
00:28:41.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.944 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.944 issued rwts: total=637,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.944 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.944 job4: (groupid=0, jobs=1): err= 0: pid=10841: Mon Jun 10 10:56:09 2024
00:28:41.944 read: IOPS=124, BW=125MiB/s (131MB/s)(1258MiB/10093msec)
00:28:41.944 slat (usec): min=33, max=98230, avg=7947.86, stdev=16654.60
00:28:41.944 clat (msec): min=91, max=1792, avg=927.70, stdev=281.49
00:28:41.944 lat (msec): min=96, max=1799, avg=935.65, stdev=283.46
00:28:41.944 clat percentiles (msec):
00:28:41.944 | 1.00th=[ 186], 5.00th=[ 506], 10.00th=[ 735], 20.00th=[ 760],
00:28:41.944 | 30.00th=[ 785], 40.00th=[ 802], 50.00th=[ 835], 60.00th=[ 902],
00:28:41.944 | 70.00th=[ 1036], 80.00th=[ 1217], 90.00th=[ 1318], 95.00th=[ 1368],
00:28:41.944 | 99.00th=[ 1737], 99.50th=[ 1754], 99.90th=[ 1787], 99.95th=[ 1787],
00:28:41.944 | 99.99th=[ 1787]
00:28:41.944 bw ( KiB/s): min=63488, max=188416, per=2.72%, avg=136252.24, stdev=36672.34, samples=17
00:28:41.944 iops : min= 62, max= 184, avg=133.06, stdev=35.81, samples=17
00:28:41.944 lat (msec) : 100=0.24%, 250=1.91%, 500=2.78%, 750=12.48%, 1000=46.90%
00:28:41.944 lat (msec) : 2000=35.69%
00:28:41.944 cpu : usr=0.03%, sys=1.40%, ctx=1524, majf=0, minf=32769
00:28:41.944 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0%
00:28:41.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.944 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.944 issued rwts: total=1258,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.944 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.944 job4: (groupid=0, jobs=1): err= 0: pid=10842: Mon Jun 10 10:56:09 2024
00:28:41.944 read: IOPS=59, BW=59.8MiB/s (62.7MB/s)(603MiB/10088msec)
00:28:41.944 slat (usec): min=37, max=107666, avg=16581.79, stdev=15329.80
00:28:41.944 clat (msec): min=87, max=2664, avg=1957.67, stdev=591.09
00:28:41.944 lat (msec): min=97, max=2668, avg=1974.26, stdev=591.41
00:28:41.944 clat percentiles (msec):
00:28:41.944 | 1.00th=[ 186], 5.00th=[ 617], 10.00th=[ 1234], 20.00th=[ 1502],
00:28:41.944 | 30.00th=[ 1703], 40.00th=[ 1905], 50.00th=[ 2140], 60.00th=[ 2333],
00:28:41.944 | 70.00th=[ 2400], 80.00th=[ 2433], 90.00th=[ 2534], 95.00th=[ 2601],
00:28:41.944 | 99.00th=[ 2668], 99.50th=[ 2668], 99.90th=[ 2668], 99.95th=[ 2668],
00:28:41.944 | 99.99th=[ 2668]
00:28:41.944 bw ( KiB/s): min= 4096, max=122880, per=1.14%, avg=57344.00, stdev=26207.20, samples=17
00:28:41.945 iops : min= 4, max= 120, avg=56.00, stdev=25.59, samples=17
00:28:41.945 lat (msec) : 100=0.33%, 250=1.49%, 500=1.99%, 750=2.32%, 1000=1.99%
00:28:41.945 lat (msec) : 2000=34.49%, >=2000=57.38%
00:28:41.945 cpu : usr=0.00%, sys=1.19%, ctx=1807, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.6%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.945 issued rwts: total=603,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.945 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.945 job4: (groupid=0, jobs=1): err= 0: pid=10843: Mon Jun 10 10:56:09 2024
00:28:41.945 read: IOPS=72, BW=72.3MiB/s (75.8MB/s)(727MiB/10057msec)
00:28:41.945 slat (usec): min=35, max=88141, avg=13771.96, stdev=15906.01
00:28:41.945 clat (msec): min=41, max=2587, avg=1676.65, stdev=517.09
00:28:41.945 lat (msec): min=57, max=2590, avg=1690.42, stdev=517.02
00:28:41.945 clat percentiles (msec):
00:28:41.945 | 1.00th=[ 148], 5.00th=[ 776], 10.00th=[ 1003], 20.00th=[ 1217],
00:28:41.945 | 30.00th=[ 1435], 40.00th=[ 1552], 50.00th=[ 1770], 60.00th=[ 1888],
00:28:41.945 | 70.00th=[ 1972], 80.00th=[ 2123], 90.00th=[ 2333], 95.00th=[ 2433],
00:28:41.945 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2601],
00:28:41.945 | 99.99th=[ 2601]
00:28:41.945 bw ( KiB/s): min=22528, max=133120, per=1.31%, avg=65536.00, stdev=31874.89, samples=18
00:28:41.945 iops : min= 22, max= 130, avg=64.00, stdev=31.13, samples=18
00:28:41.945 lat (msec) : 50=0.14%, 100=0.69%, 250=0.69%, 500=1.24%, 750=2.06%
00:28:41.945 lat (msec) : 1000=4.81%, 2000=62.59%, >=2000=27.79%
00:28:41.945 cpu : usr=0.02%, sys=1.35%, ctx=1711, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.945 issued rwts: total=727,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.945 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.945 job4: (groupid=0, jobs=1): err= 0: pid=10844: Mon Jun 10 10:56:09 2024
00:28:41.945 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(565MiB/10131msec)
00:28:41.945 slat (usec): min=52, max=98078, avg=17761.59, stdev=15375.53
00:28:41.945 clat (msec): min=93, max=2649, avg=2147.61, stdev=541.86
00:28:41.945 lat (msec): min=191, max=2651, avg=2165.37, stdev=541.60
00:28:41.945 clat percentiles (msec):
00:28:41.945 | 1.00th=[ 309], 5.00th=[ 793], 10.00th=[ 1200], 20.00th=[ 1955],
00:28:41.945 | 30.00th=[ 2198], 40.00th=[ 2333], 50.00th=[ 2400], 60.00th=[ 2433],
00:28:41.945 | 70.00th=[ 2433], 80.00th=[ 2467], 90.00th=[ 2500], 95.00th=[ 2534],
00:28:41.945 | 99.00th=[ 2601], 99.50th=[ 2635], 99.90th=[ 2635], 99.95th=[ 2635],
00:28:41.945 | 99.99th=[ 2635]
00:28:41.945 bw ( KiB/s): min=32768, max=69632, per=1.05%, avg=52645.65, stdev=11443.28, samples=17
00:28:41.945 iops : min= 32, max= 68, avg=51.41, stdev=11.18, samples=17
00:28:41.945 lat (msec) : 100=0.18%, 250=0.53%, 500=1.77%, 750=2.12%, 1000=3.01%
00:28:41.945 lat (msec) : 2000=14.34%, >=2000=78.05%
00:28:41.945 cpu : usr=0.00%, sys=1.23%, ctx=1801, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.945 issued rwts: total=565,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.945 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.945 job4: (groupid=0, jobs=1): err= 0: pid=10845: Mon Jun 10 10:56:09 2024
00:28:41.945 read: IOPS=53, BW=53.1MiB/s (55.6MB/s)(533MiB/10046msec)
00:28:41.945 slat (usec): min=32, max=110707, avg=18777.73, stdev=25520.52
00:28:41.945 clat (msec): min=35, max=3604, avg=2249.73, stdev=854.44
00:28:41.945 lat (msec): min=46, max=3677, avg=2268.51, stdev=857.37
00:28:41.945 clat percentiles (msec):
00:28:41.945 | 1.00th=[ 52], 5.00th=[ 575], 10.00th=[ 869], 20.00th=[ 1502],
00:28:41.945 | 30.00th=[ 2039], 40.00th=[ 2265], 50.00th=[ 2400], 60.00th=[ 2500],
00:28:41.945 | 70.00th=[ 2601], 80.00th=[ 3138], 90.00th=[ 3239], 95.00th=[ 3339],
00:28:41.945 | 99.00th=[ 3574], 99.50th=[ 3608], 99.90th=[ 3608], 99.95th=[ 3608],
00:28:41.945 | 99.99th=[ 3608]
00:28:41.945 bw ( KiB/s): min=22528, max=83968, per=0.91%, avg=45899.29, stdev=17196.78, samples=17
00:28:41.945 iops : min= 22, max= 82, avg=44.82, stdev=16.79, samples=17
00:28:41.945 lat (msec) : 50=0.56%, 100=1.31%, 250=0.94%, 500=1.50%, 750=1.31%
00:28:41.945 lat (msec) : 1000=6.19%, 2000=17.26%, >=2000=70.92%
00:28:41.945 cpu : usr=0.02%, sys=0.97%, ctx=1610, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.945 issued rwts: total=533,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.945 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.945 job4: (groupid=0, jobs=1): err= 0: pid=10846: Mon Jun 10 10:56:09 2024
00:28:41.945 read: IOPS=54, BW=54.7MiB/s (57.4MB/s)(549MiB/10033msec)
00:28:41.945 slat (usec): min=32, max=151004, avg=18214.87, stdev=17823.89
00:28:41.945 clat (msec): min=31, max=3020, avg=2037.97, stdev=687.45
00:28:41.945 lat (msec): min=35, max=3030, avg=2056.19, stdev=689.28
00:28:41.945 clat percentiles (msec):
00:28:41.945 | 1.00th=[ 96], 5.00th=[ 435], 10.00th=[ 835], 20.00th=[ 1452],
00:28:41.945 | 30.00th=[ 2198], 40.00th=[ 2232], 50.00th=[ 2265], 60.00th=[ 2333],
00:28:41.945 | 70.00th=[ 2400], 80.00th=[ 2500], 90.00th=[ 2534], 95.00th=[ 2769],
00:28:41.945 | 99.00th=[ 3004], 99.50th=[ 3004], 99.90th=[ 3037], 99.95th=[ 3037],
00:28:41.945 | 99.99th=[ 3037]
00:28:41.945 bw ( KiB/s): min= 6144, max=104448, per=1.06%, avg=53248.00, stdev=21701.62, samples=15
00:28:41.945 iops : min= 6, max= 102, avg=52.00, stdev=21.19, samples=15
00:28:41.945 lat (msec) : 50=0.73%, 100=0.36%, 250=2.37%, 500=2.37%, 750=3.28%
00:28:41.945 lat (msec) : 1000=2.19%, 2000=14.75%, >=2000=73.95%
00:28:41.945 cpu : usr=0.05%, sys=0.86%, ctx=1709, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.945 issued rwts: total=549,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.945 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.945 job4: (groupid=0, jobs=1): err= 0: pid=10847: Mon Jun 10 10:56:09 2024
00:28:41.945 read: IOPS=51, BW=51.6MiB/s (54.1MB/s)(519MiB/10051msec)
00:28:41.945 slat (usec): min=99, max=110198, avg=19280.60, stdev=17439.47
00:28:41.945 clat (msec): min=41, max=3085, avg=2168.99, stdev=755.30
00:28:41.945 lat (msec): min=54, max=3111, avg=2188.27, stdev=756.10
00:28:41.945 clat percentiles (msec):
00:28:41.945 | 1.00th=[ 64], 5.00th=[ 288], 10.00th=[ 827], 20.00th=[ 1972],
00:28:41.945 | 30.00th=[ 2072], 40.00th=[ 2165], 50.00th=[ 2299], 60.00th=[ 2500],
00:28:41.945 | 70.00th=[ 2635], 80.00th=[ 2735], 90.00th=[ 2937], 95.00th=[ 3037],
00:28:41.945 | 99.00th=[ 3071], 99.50th=[ 3071], 99.90th=[ 3071], 99.95th=[ 3071],
00:28:41.945 | 99.99th=[ 3071]
00:28:41.945 bw ( KiB/s): min=24576, max=79872, per=1.04%, avg=52077.71, stdev=14091.17, samples=14
00:28:41.945 iops : min= 24, max= 78, avg=50.86, stdev=13.76, samples=14
00:28:41.945 lat (msec) : 50=0.19%, 100=1.73%, 250=2.70%, 500=2.12%, 750=2.70%
00:28:41.945 lat (msec) : 1000=1.54%, 2000=12.52%, >=2000=76.49%
00:28:41.945 cpu : usr=0.03%, sys=1.22%, ctx=1708, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.945 issued rwts: total=519,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.945 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.945 job4: (groupid=0, jobs=1): err= 0: pid=10848: Mon Jun 10 10:56:09 2024
00:28:41.945 read: IOPS=52, BW=52.9MiB/s (55.5MB/s)(534MiB/10097msec)
00:28:41.945 slat (usec): min=39, max=117292, avg=18757.14, stdev=19169.93
00:28:41.945 clat (msec): min=78, max=3129, avg=2189.83, stdev=808.38
00:28:41.945 lat (msec): min=160, max=3153, avg=2208.59, stdev=810.48
00:28:41.945 clat percentiles (msec):
00:28:41.945 | 1.00th=[ 165], 5.00th=[ 259], 10.00th=[ 651], 20.00th=[ 1603],
00:28:41.945 | 30.00th=[ 2299], 40.00th=[ 2433], 50.00th=[ 2467], 60.00th=[ 2534],
00:28:41.945 | 70.00th=[ 2635], 80.00th=[ 2735], 90.00th=[ 2903], 95.00th=[ 3037],
00:28:41.945 | 99.00th=[ 3104], 99.50th=[ 3104], 99.90th=[ 3138], 99.95th=[ 3138],
00:28:41.945 | 99.99th=[ 3138]
00:28:41.945 bw ( KiB/s): min=12288, max=102400, per=1.04%, avg=51968.00, stdev=21097.05, samples=16
00:28:41.945 iops : min= 12, max= 100, avg=50.75, stdev=20.60, samples=16
00:28:41.945 lat (msec) : 100=0.19%, 250=2.81%, 500=5.81%, 750=1.87%, 1000=2.06%
00:28:41.945 lat (msec) : 2000=11.80%, >=2000=75.47%
00:28:41.945 cpu : usr=0.00%, sys=1.03%, ctx=1699, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.945 issued rwts: total=534,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.945 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.945 job4: (groupid=0, jobs=1): err= 0: pid=10849: Mon Jun 10 10:56:09 2024
00:28:41.945 read: IOPS=56, BW=57.0MiB/s (59.7MB/s)(573MiB/10056msec)
00:28:41.945 slat (usec): min=47, max=93793, avg=17450.84, stdev=16669.68
00:28:41.945 clat (msec): min=54, max=2954, avg=1927.43, stdev=690.39
00:28:41.945 lat (msec): min=87, max=2963, avg=1944.88, stdev=692.11
00:28:41.945 clat percentiles (msec):
00:28:41.945 | 1.00th=[ 102], 5.00th=[ 642], 10.00th=[ 1150], 20.00th=[ 1301],
00:28:41.945 | 30.00th=[ 1502], 40.00th=[ 1737], 50.00th=[ 2039], 60.00th=[ 2198],
00:28:41.945 | 70.00th=[ 2366], 80.00th=[ 2702], 90.00th=[ 2802], 95.00th=[ 2836],
00:28:41.945 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970],
00:28:41.945 | 99.99th=[ 2970]
00:28:41.945 bw ( KiB/s): min=24576, max=139264, per=1.23%, avg=61878.86, stdev=27088.67, samples=14
00:28:41.945 iops : min= 24, max= 136, avg=60.43, stdev=26.45, samples=14
00:28:41.945 lat (msec) : 100=0.87%, 250=0.87%, 500=1.92%, 750=2.27%, 1000=2.62%
00:28:41.945 lat (msec) : 2000=40.49%, >=2000=50.96%
00:28:41.945 cpu : usr=0.03%, sys=0.94%, ctx=1687, majf=0, minf=32769
00:28:41.945 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0%
00:28:41.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.945 issued rwts: total=573,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.946 job5: (groupid=0, jobs=1): err= 0: pid=10853: Mon Jun 10 10:56:09 2024
00:28:41.946 read: IOPS=59, BW=59.7MiB/s (62.6MB/s)(601MiB/10069msec)
00:28:41.946 slat (usec): min=54, max=139457, avg=16645.25, stdev=22349.60
00:28:41.946 clat (msec): min=63, max=2879, avg=1873.46, stdev=807.04
00:28:41.946 lat (msec): min=141, max=2892, avg=1890.10, stdev=810.80
00:28:41.946 clat percentiles (msec):
00:28:41.946 | 1.00th=[ 150], 5.00th=[ 255], 10.00th=[ 414], 20.00th=[ 927],
00:28:41.946 | 30.00th=[ 1821], 40.00th=[ 2106], 50.00th=[ 2198], 60.00th=[ 2265],
00:28:41.946 | 70.00th=[ 2333], 80.00th=[ 2534], 90.00th=[ 2702], 95.00th=[ 2769],
00:28:41.946 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869],
00:28:41.946 | 99.99th=[ 2869]
00:28:41.946 bw ( KiB/s): min=20480, max=114688, per=1.15%, avg=57929.14, stdev=27657.38, samples=14
00:28:41.946 iops : min= 20, max= 112, avg=56.57, stdev=27.01, samples=14
00:28:41.946 lat (msec) : 100=0.17%, 250=4.49%, 500=8.32%, 750=5.66%, 1000=1.50%
00:28:41.946 lat (msec) : 2000=15.47%, >=2000=64.39%
00:28:41.946 cpu : usr=0.02%, sys=0.98%, ctx=1696, majf=0, minf=32769
00:28:41.946 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5%
00:28:41.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.946 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.946 issued rwts: total=601,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.946 job5: (groupid=0, jobs=1): err= 0: pid=10854: Mon Jun 10 10:56:09 2024
00:28:41.946 read: IOPS=84, BW=84.2MiB/s (88.3MB/s)(847MiB/10062msec)
00:28:41.946 slat (usec): min=39, max=139600, avg=11807.16, stdev=23646.65
00:28:41.946 clat (msec): min=57, max=2958, avg=1337.17, stdev=635.43
00:28:41.946 lat (msec): min=80, max=2964, avg=1348.98, stdev=638.18
00:28:41.946 clat percentiles (msec):
00:28:41.946 | 1.00th=[ 259], 5.00th=[ 776], 10.00th=[ 802], 20.00th=[ 835],
00:28:41.946 | 30.00th=[ 869], 40.00th=[ 885], 50.00th=[ 969], 60.00th=[ 1418],
00:28:41.946 | 70.00th=[ 1754], 80.00th=[ 1938], 90.00th=[ 2366], 95.00th=[ 2635],
00:28:41.946 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2970], 99.95th=[ 2970],
00:28:41.946 | 99.99th=[ 2970]
00:28:41.946 bw ( KiB/s): min=12288, max=157696, per=1.96%, avg=98304.00, stdev=49582.87, samples=15
00:28:41.946 iops : min= 12, max= 154, avg=96.00, stdev=48.42, samples=15
00:28:41.946 lat (msec) : 100=0.24%, 250=0.71%, 500=0.94%, 750=1.53%, 1000=49.94%
00:28:41.946 lat (msec) : 2000=31.88%, >=2000=14.76%
00:28:41.946 cpu : usr=0.08%, sys=1.04%, ctx=1606, majf=0, minf=32769
00:28:41.946 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6%
00:28:41.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.946 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:28:41.946 issued rwts: total=847,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.946 job5: (groupid=0, jobs=1): err= 0: pid=10855: Mon Jun 10 10:56:09 2024
00:28:41.946 read: IOPS=71, BW=71.5MiB/s (75.0MB/s)(719MiB/10051msec)
00:28:41.946 slat (usec): min=40, max=150066, avg=13923.31, stdev=23316.78
00:28:41.946 clat (msec): min=37, max=4093, avg=1683.73, stdev=1076.60
00:28:41.946 lat (msec): min=65, max=4118, avg=1697.66, stdev=1081.83
00:28:41.946 clat percentiles (msec):
00:28:41.946 | 1.00th=[ 81], 5.00th=[ 726], 10.00th=[ 735], 20.00th=[ 768],
00:28:41.946 | 30.00th=[ 827], 40.00th=[ 1083], 50.00th=[ 1200], 60.00th=[ 1536],
00:28:41.946 | 70.00th=[ 2400], 80.00th=[ 2836], 90.00th=[ 3406], 95.00th=[ 3742],
00:28:41.946 | 99.00th=[ 4044], 99.50th=[ 4044], 99.90th=[ 4077], 99.95th=[ 4077],
00:28:41.946 | 99.99th=[ 4077]
00:28:41.946 bw ( KiB/s): min=20480, max=178176, per=1.35%, avg=67704.47, stdev=52770.64, samples=17
00:28:41.946 iops : min= 20, max= 174, avg=66.12, stdev=51.53, samples=17
00:28:41.946 lat (msec) : 50=0.14%, 100=1.81%, 250=1.11%, 500=0.97%, 750=14.60%
00:28:41.946 lat (msec) : 1000=19.19%, 2000=26.70%, >=2000=35.47%
00:28:41.946 cpu : usr=0.00%, sys=1.33%, ctx=1643, majf=0, minf=32769
00:28:41.946 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2%
00:28:41.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.946 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.946 issued rwts: total=719,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.946 job5: (groupid=0, jobs=1): err= 0: pid=10856: Mon Jun 10 10:56:09 2024
00:28:41.946 read: IOPS=44, BW=44.8MiB/s (47.0MB/s)(452MiB/10078msec)
00:28:41.946 slat (usec): min=381, max=126214, avg=22139.01, stdev=23982.45
00:28:41.946 clat (msec): min=68, max=3122, avg=2397.78, stdev=729.02
00:28:41.946 lat (msec): min=81, max=3143, avg=2419.92, stdev=728.41
00:28:41.946 clat percentiles (msec):
00:28:41.946 | 1.00th=[ 133], 5.00th=[ 542], 10.00th=[ 1099], 20.00th=[ 2232],
00:28:41.946 | 30.00th=[ 2467], 40.00th=[ 2567], 50.00th=[ 2702], 60.00th=[ 2769],
00:28:41.946 | 70.00th=[ 2802], 80.00th=[ 2869], 90.00th=[ 2937], 95.00th=[ 2970],
00:28:41.946 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3138], 99.95th=[ 3138],
00:28:41.946 | 99.99th=[ 3138]
00:28:41.946 bw ( KiB/s): min=34816, max=67584, per=0.95%, avg=47542.86, stdev=9304.97, samples=14
00:28:41.946 iops : min= 34, max= 66, avg=46.43, stdev= 9.09, samples=14
00:28:41.946 lat (msec) : 100=0.44%, 250=1.33%, 500=2.88%, 750=2.21%, 1000=2.21%
00:28:41.946 lat (msec) : 2000=9.07%, >=2000=81.86%
00:28:41.946 cpu : usr=0.00%, sys=0.93%, ctx=1721, majf=0, minf=32769
00:28:41.946 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1%
00:28:41.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.946 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.946 issued rwts: total=452,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.946 job5: (groupid=0, jobs=1): err= 0: pid=10857: Mon Jun 10 10:56:09 2024
00:28:41.946 read: IOPS=54, BW=54.6MiB/s (57.3MB/s)(551MiB/10085msec)
00:28:41.946 slat (usec): min=30, max=129848, avg=18151.85, stdev=19576.68
00:28:41.946 clat (msec): min=81, max=3023, avg=2059.70, stdev=660.24
00:28:41.946 lat (msec): min=121, max=3025, avg=2077.85, stdev=660.57
00:28:41.946 clat percentiles (msec):
00:28:41.946 | 1.00th=[ 176], 5.00th=[ 642], 10.00th=[ 1133], 20.00th=[ 1603],
00:28:41.946 | 30.00th=[ 1720], 40.00th=[ 1938], 50.00th=[ 2232], 60.00th=[ 2500],
00:28:41.946 | 70.00th=[ 2567], 80.00th=[ 2635], 90.00th=[ 2702], 95.00th=[ 2769],
00:28:41.946 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3037], 99.95th=[ 3037],
00:28:41.946 | 99.99th=[ 3037]
00:28:41.946 bw ( KiB/s): min= 2048, max=118784, per=1.08%, avg=54272.00, stdev=24393.28, samples=16
00:28:41.946 iops : min= 2, max= 116, avg=53.00, stdev=23.82, samples=16
00:28:41.946 lat (msec) : 100=0.18%, 250=2.00%, 500=1.45%, 750=2.54%, 1000=2.54%
00:28:41.946 lat (msec) : 2000=34.48%, >=2000=56.81%
00:28:41.946 cpu : usr=0.02%, sys=0.96%, ctx=1659, majf=0, minf=32769
00:28:41.946 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.6%
00:28:41.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.946 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.946 issued rwts: total=551,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.946 job5: (groupid=0, jobs=1): err= 0: pid=10858: Mon Jun 10 10:56:09 2024
00:28:41.946 read: IOPS=60, BW=61.0MiB/s (63.9MB/s)(616MiB/10106msec)
00:28:41.946 slat (usec): min=43, max=133340, avg=16270.39, stdev=22589.54
00:28:41.946 clat (msec): min=80, max=3710, avg=2014.54, stdev=958.06
00:28:41.946 lat (msec): min=153, max=3711, avg=2030.81, stdev=960.75
00:28:41.946 clat percentiles (msec):
00:28:41.946 | 1.00th=[ 451], 5.00th=[ 894], 10.00th=[ 1099], 20.00th=[ 1133],
00:28:41.946 | 30.00th=[ 1217], 40.00th=[ 1318], 50.00th=[ 1418], 60.00th=[ 2433],
00:28:41.946 | 70.00th=[ 2903], 80.00th=[ 3138], 90.00th=[ 3339], 95.00th=[ 3440],
00:28:41.946 | 99.00th=[ 3675], 99.50th=[ 3708], 99.90th=[ 3708], 99.95th=[ 3708],
00:28:41.946 | 99.99th=[ 3708]
00:28:41.946 bw ( KiB/s): min=20480, max=169984, per=1.05%, avg=52597.16, stdev=43419.47, samples=19
00:28:41.946 iops : min= 20, max= 166, avg=51.32, stdev=42.42, samples=19
00:28:41.946 lat (msec) : 100=0.16%, 250=0.32%, 500=0.81%, 750=0.97%, 1000=4.22%
00:28:41.946 lat (msec) : 2000=48.86%, >=2000=44.64%
00:28:41.946 cpu : usr=0.02%, sys=1.53%, ctx=1680, majf=0, minf=32769
00:28:41.946 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8%
00:28:41.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.946 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.946 issued rwts: total=616,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.946 job5: (groupid=0, jobs=1): err= 0: pid=10859: Mon Jun 10 10:56:09 2024
00:28:41.946 read: IOPS=56, BW=56.8MiB/s (59.6MB/s)(570MiB/10030msec)
00:28:41.946 slat (usec): min=34, max=119840, avg=17548.57, stdev=21479.98
00:28:41.946 clat (msec): min=24, max=2955, avg=2030.08, stdev=708.04
00:28:41.946 lat (msec): min=30, max=2988, avg=2047.63, stdev=708.12
00:28:41.947 clat percentiles (msec):
00:28:41.947 | 1.00th=[ 44], 5.00th=[ 435], 10.00th=[ 995], 20.00th=[ 1435],
00:28:41.947 | 30.00th=[ 1770], 40.00th=[ 2123], 50.00th=[ 2265], 60.00th=[ 2400],
00:28:41.947 | 70.00th=[ 2467], 80.00th=[ 2601], 90.00th=[ 2702], 95.00th=[ 2869],
00:28:41.947 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970],
00:28:41.947 | 99.99th=[ 2970]
00:28:41.947 bw ( KiB/s): min=12288, max=143360, per=1.12%, avg=56115.20, stdev=29105.49, samples=15
00:28:41.947 iops : min= 12, max= 140, avg=54.80, stdev=28.42, samples=15
00:28:41.947 lat (msec) : 50=1.05%, 100=1.23%, 250=1.23%, 500=1.93%, 750=2.11%
00:28:41.947 lat (msec) : 1000=2.46%, 2000=26.67%, >=2000=63.33%
00:28:41.947 cpu : usr=0.01%, sys=1.00%, ctx=1629, majf=0, minf=32769
00:28:41.947 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9%
00:28:41.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.947 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.947 issued rwts: total=570,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.947 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.947 job5: (groupid=0, jobs=1): err= 0: pid=10860: Mon Jun 10 10:56:09 2024
00:28:41.947 read: IOPS=29, BW=29.2MiB/s (30.6MB/s)(294MiB/10071msec)
00:28:41.947 slat (usec): min=29, max=159966, avg=34106.84, stdev=37816.24
00:28:41.947 clat (msec): min=41, max=6098, avg=3751.01, stdev=1552.80
00:28:41.947 lat (msec): min=71, max=6101, avg=3785.11, stdev=1550.24
00:28:41.947 clat percentiles (msec):
00:28:41.947 | 1.00th=[ 84], 5.00th=[ 1217], 10.00th=[ 1636], 20.00th=[ 1804],
00:28:41.947 | 30.00th=[ 2903], 40.00th=[ 3406], 50.00th=[ 4044], 60.00th=[ 4396],
00:28:41.947 | 70.00th=[ 5000], 80.00th=[ 5336], 90.00th=[ 5537], 95.00th=[ 5805],
00:28:41.947 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074],
00:28:41.947 | 99.99th=[ 6074]
00:28:41.947 bw ( KiB/s): min= 8192, max=96256, per=0.46%, avg=23262.00, stdev=21845.49, samples=14
00:28:41.947 iops : min= 8, max= 94, avg=22.71, stdev=21.33, samples=14
00:28:41.947 lat (msec) : 50=0.34%, 100=0.68%, 250=0.68%, 500=1.02%, 750=1.02%
00:28:41.947 lat (msec) : 1000=0.68%, 2000=17.35%, >=2000=78.23%
00:28:41.947 cpu : usr=0.00%, sys=0.82%, ctx=1572, majf=0, minf=32769
00:28:41.947 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.9%, >=64=78.6%
00:28:41.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.947 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:28:41.947 issued rwts: total=294,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.947 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.947 job5: (groupid=0, jobs=1): err= 0: pid=10861: Mon Jun 10 10:56:09 2024
00:28:41.947 read: IOPS=44, BW=44.0MiB/s (46.2MB/s)(443MiB/10057msec)
00:28:41.947 slat (usec): min=45, max=155280, avg=22604.43, stdev=22490.92
00:28:41.947 clat (msec): min=41, max=3978, avg=2522.42, stdev=996.64
00:28:41.947 lat (msec): min=56, max=4021, avg=2545.03, stdev=999.00
00:28:41.947 clat percentiles (msec):
00:28:41.947 | 1.00th=[ 78], 5.00th=[ 464], 10.00th=[ 885], 20.00th=[ 1636],
00:28:41.947 | 30.00th=[ 2500], 40.00th=[ 2601], 50.00th=[ 2635], 60.00th=[ 2735],
00:28:41.947 | 70.00th=[ 3037], 80.00th=[ 3473], 90.00th=[ 3708], 95.00th=[ 3809],
00:28:41.947 | 99.00th=[ 3910], 99.50th=[ 3943], 99.90th=[ 3977], 99.95th=[ 3977],
00:28:41.947 | 99.99th=[ 3977]
00:28:41.947 bw ( KiB/s): min=14336, max=67584, per=0.85%, avg=42422.86, stdev=16333.28, samples=14
00:28:41.947 iops : min= 14, max= 66, avg=41.43, stdev=15.95, samples=14
00:28:41.947 lat (msec) : 50=0.23%, 100=1.58%, 250=1.35%, 500=2.48%, 750=2.93%
00:28:41.947 lat (msec) : 1000=2.71%, 2000=12.64%, >=2000=76.07%
00:28:41.947 cpu : usr=0.04%, sys=0.77%, ctx=1627, majf=0, minf=32769
00:28:41.947 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8%
00:28:41.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.947 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:28:41.947 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.947 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.947 job5: (groupid=0, jobs=1): err= 0: pid=10862: Mon Jun 10 10:56:09 2024
00:28:41.947 read: IOPS=72, BW=72.6MiB/s (76.1MB/s)(732MiB/10085msec)
00:28:41.947 slat (usec): min=28, max=127573, avg=13660.00, stdev=23354.85
00:28:41.947 clat (msec): min=83, max=3027, avg=1598.32, stdev=796.86
00:28:41.947 lat (msec): min=117, max=3045, avg=1611.98, stdev=799.67
00:28:41.947 clat percentiles (msec):
00:28:41.947 | 1.00th=[ 284], 5.00th=[ 718], 10.00th=[ 852], 20.00th=[ 885],
00:28:41.947 | 30.00th=[ 919], 40.00th=[ 986], 50.00th=[ 1318], 60.00th=[ 1804],
00:28:41.947 | 70.00th=[ 2232], 80.00th=[ 2567], 90.00th=[ 2735], 95.00th=[ 2836],
00:28:41.947 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3037], 99.95th=[ 3037],
00:28:41.947 | 99.99th=[ 3037]
00:28:41.947 bw ( KiB/s): min=18432, max=157696, per=1.54%, avg=77420.31, stdev=46576.90, samples=16
00:28:41.947 iops : min= 18, max= 154, avg=75.56, stdev=45.41, samples=16
00:28:41.947 lat (msec) : 100=0.14%, 250=0.55%, 500=2.32%, 750=2.73%, 1000=34.43%
00:28:41.947 lat (msec) : 2000=24.86%, >=2000=34.97%
00:28:41.947 cpu : usr=0.00%, sys=1.10%, ctx=1522, majf=0, minf=32769
00:28:41.947 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4%
00:28:41.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:41.947 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:28:41.947 issued rwts: total=732,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:41.947 latency : target=0, window=0, percentile=100.00%, depth=128
00:28:41.947 job5: (groupid=0, jobs=1): err= 0: pid=10863: Mon Jun 10 10:56:09 2024
00:28:41.947 read: IOPS=60, BW=60.2MiB/s (63.1MB/s)(604MiB/10037msec)
00:28:41.947 slat (usec): min=29, max=139570, avg=16560.49, stdev=19142.72
00:28:41.947 clat (msec): min=32, max=3244, avg=1820.58, stdev=701.29
00:28:41.947 lat (msec): min=36, max=3244, avg=1837.14, stdev=702.49
00:28:41.947 clat percentiles (msec):
00:28:41.947 | 1.00th=[ 78], 5.00th=[ 600], 10.00th=[ 877], 20.00th=[ 1116],
00:28:41.947 | 30.00th=[ 1502], 40.00th=[ 1787], 50.00th=[ 1972], 60.00th=[ 2072],
00:28:41.947 | 70.00th=[ 2198], 80.00th=[ 2265], 90.00th=[ 2802], 95.00th=[ 2937],
00:28:41.947 | 99.00th=[ 3171], 99.50th=[ 3239], 99.90th=[ 3239], 99.95th=[ 3239],
00:28:41.947 | 99.99th=[ 3239]
00:28:41.947 bw ( KiB/s): min=12288, max=210944, per=1.31%, avg=65828.57, stdev=45892.20, samples=14
00:28:41.947 iops : min= 12, max= 206, avg=64.29, stdev=44.82, samples=14
00:28:41.947 lat (msec) : 50=0.50%, 100=0.99%, 250=1.16%, 500=1.66%, 750=2.15%
00:28:41.947 lat (msec) : 1000=9.44%, 2000=38.08%, >=2000=46.03%
00:28:41.947 cpu : usr=0.03%, sys=0.95%, ctx=1643, majf=0, minf=32769
00:28:41.947 IO depths :
1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:28:41.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.947 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.947 issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.947 job5: (groupid=0, jobs=1): err= 0: pid=10864: Mon Jun 10 10:56:09 2024 00:28:41.947 read: IOPS=47, BW=47.9MiB/s (50.2MB/s)(483MiB/10082msec) 00:28:41.947 slat (usec): min=88, max=107818, avg=20738.16, stdev=23469.70 00:28:41.947 clat (msec): min=63, max=3504, avg=2364.27, stdev=866.13 00:28:41.947 lat (msec): min=96, max=3512, avg=2385.00, stdev=867.74 00:28:41.947 clat percentiles (msec): 00:28:41.947 | 1.00th=[ 134], 5.00th=[ 409], 10.00th=[ 927], 20.00th=[ 1938], 00:28:41.947 | 30.00th=[ 2165], 40.00th=[ 2232], 50.00th=[ 2467], 60.00th=[ 2735], 00:28:41.947 | 70.00th=[ 2970], 80.00th=[ 3171], 90.00th=[ 3306], 95.00th=[ 3406], 00:28:41.947 | 99.00th=[ 3473], 99.50th=[ 3473], 99.90th=[ 3507], 99.95th=[ 3507], 00:28:41.947 | 99.99th=[ 3507] 00:28:41.947 bw ( KiB/s): min= 2048, max=94208, per=0.91%, avg=45436.38, stdev=19131.01, samples=16 00:28:41.947 iops : min= 2, max= 92, avg=44.31, stdev=18.74, samples=16 00:28:41.947 lat (msec) : 100=0.41%, 250=3.11%, 500=2.69%, 750=1.86%, 1000=2.69% 00:28:41.947 lat (msec) : 2000=9.52%, >=2000=79.71% 00:28:41.947 cpu : usr=0.01%, sys=0.91%, ctx=1621, majf=0, minf=32769 00:28:41.947 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=87.0% 00:28:41.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.947 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:28:41.947 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.947 job5: (groupid=0, jobs=1): err= 0: pid=10865: Mon Jun 10 10:56:09 2024 00:28:41.947 read: IOPS=57, BW=57.6MiB/s (60.4MB/s)(581MiB/10080msec) 00:28:41.947 slat (usec): min=33, max=120271, avg=17212.96, stdev=21619.78 00:28:41.947 clat (msec): min=77, max=3057, avg=2075.99, stdev=607.33 00:28:41.947 lat (msec): min=100, max=3084, avg=2093.20, stdev=607.77 00:28:41.947 clat percentiles (msec): 00:28:41.947 | 1.00th=[ 222], 5.00th=[ 506], 10.00th=[ 1183], 20.00th=[ 1838], 00:28:41.947 | 30.00th=[ 1972], 40.00th=[ 2072], 50.00th=[ 2165], 60.00th=[ 2333], 00:28:41.947 | 70.00th=[ 2400], 80.00th=[ 2500], 90.00th=[ 2601], 95.00th=[ 2869], 00:28:41.947 | 99.00th=[ 3004], 99.50th=[ 3037], 99.90th=[ 3071], 99.95th=[ 3071], 00:28:41.947 | 99.99th=[ 3071] 00:28:41.947 bw ( KiB/s): min=24576, max=83968, per=1.09%, avg=54693.65, stdev=16138.39, samples=17 00:28:41.947 iops : min= 24, max= 82, avg=53.41, stdev=15.76, samples=17 00:28:41.947 lat (msec) : 100=0.17%, 250=0.86%, 500=3.79%, 750=1.89%, 1000=1.89% 00:28:41.947 lat (msec) : 2000=23.41%, >=2000=67.99% 00:28:41.947 cpu : usr=0.02%, sys=1.07%, ctx=1612, majf=0, minf=32769 00:28:41.947 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.2% 00:28:41.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.947 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:28:41.947 issued rwts: total=581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:41.947 00:28:41.947 Run status group 0 (all jobs): 00:28:41.947 READ: bw=4899MiB/s 
(5137MB/s), 29.2MiB/s-158MiB/s (30.6MB/s-166MB/s), io=48.6GiB (52.1GB), run=10030-10151msec 00:28:41.947 00:28:41.947 Disk stats (read/write): 00:28:41.947 nvme0n1: ios=64051/0, merge=0/0, ticks=5957493/0, in_queue=5957493, util=98.50% 00:28:41.947 nvme2n1: ios=61023/0, merge=0/0, ticks=5748792/0, in_queue=5748792, util=98.72% 00:28:41.947 nvme3n1: ios=74771/0, merge=0/0, ticks=6876029/0, in_queue=6876029, util=98.76% 00:28:41.947 nvme4n1: ios=65168/0, merge=0/0, ticks=6001319/0, in_queue=6001319, util=98.96% 00:28:41.947 nvme5n1: ios=68313/0, merge=0/0, ticks=6399041/0, in_queue=6399041, util=98.98% 00:28:41.947 nvme6n1: ios=58986/0, merge=0/0, ticks=5630486/0, in_queue=5630486, util=98.93% 00:28:41.947 10:56:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:28:41.947 10:56:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:28:41.948 10:56:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:41.948 10:56:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:28:41.948 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000000 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000000 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:41.948 10:56:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:42.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:42.515 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:28:42.515 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:28:42.515 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:42.515 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000001 00:28:42.515 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:42.515 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000001 00:28:42.515 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:28:42.516 
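Editor's note: the group summary and disk stats above close a fio run that drove all six connected NVMe-oF namespaces at queue depth 128 for roughly 10 s per job. The harness's actual job file is not shown in this log, so the sketch below is an assumed reconstruction from the reported numbers (READ-only rows; a bandwidth/IOPS ratio of ~1 MiB per request):

  # sketch.fio -- illustrative reconstruction, not the file srq_overwhelm.sh uses
  [global]
  ioengine=libaio
  direct=1
  # only READ rows appear above; ~71 IOPS at ~71.5 MiB/s implies ~1 MiB per I/O
  rw=randread
  bs=1M
  # matches the "latency : ... depth=128" lines in each job's stats
  iodepth=128
  runtime=10
  time_based

  [job5]
  # one [jobN] section per connected namespace, job0..job5
  filename=/dev/nvme6n1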
10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:42.516 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.516 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:42.516 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.516 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:42.516 10:56:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:43.451 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000002 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000002 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:43.451 10:56:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:44.387 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:44.387 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:28:44.387 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:28:44.387 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000003 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000003 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.388 10:56:13 
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:44.388 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:44.955 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000004 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000004 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.955 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:28:44.956 10:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:45.892 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000005 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000005 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp 
']' 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:45.892 rmmod nvme_rdma 00:28:45.892 rmmod nvme_fabrics 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 10046 ']' 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 10046 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@949 -- # '[' -z 10046 ']' 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # kill -0 10046 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # uname 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:45.892 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 10046 00:28:46.151 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:46.151 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:46.151 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # echo 'killing process with pid 10046' 00:28:46.151 killing process with pid 10046 00:28:46.151 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # kill 10046 00:28:46.151 10:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # wait 10046 00:28:46.410 10:56:15 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:46.410 10:56:15 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:46.410 00:28:46.410 real 0m24.988s 00:28:46.410 user 1m24.337s 00:28:46.410 sys 0m14.978s 00:28:46.410 10:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:46.410 10:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:28:46.410 ************************************ 00:28:46.410 END TEST nvmf_srq_overwhelm 00:28:46.410 ************************************ 00:28:46.410 10:56:15 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:46.411 10:56:15 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:46.411 10:56:15 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:46.411 10:56:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:46.411 ************************************ 00:28:46.411 START TEST nvmf_shutdown 00:28:46.411 ************************************ 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:28:46.411 * Looking for test storage... 
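Editor's note: the srq_overwhelm teardown traced above is a small loop over the six subsystems: disconnect the initiator-side controller, poll lsblk until its serial number disappears, then delete the subsystem over the RPC socket. Condensed to its effect (waitforserial_disconnect and rpc_cmd are the autotest_common.sh helpers visible in the trace):

  for i in $(seq 0 5); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      # helper polls 'lsblk -o NAME,SERIAL' until the serial is gone
      waitforserial_disconnect "SPDK0000000000000$i"
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done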
00:28:46.411 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.411 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.670 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.670 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.670 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.670 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.670 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.670 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:46.671 ************************************ 00:28:46.671 START TEST nvmf_shutdown_tc1 00:28:46.671 ************************************ 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:28:46.671 10:56:15 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.671 10:56:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:53.240 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:53.241 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:53.241 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
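Editor's note: gather_supported_nvmf_pci_devs matches on fixed vendor:device IDs (Intel 0x8086 parts 0x1592/0x159b/0x37d2 plus the Mellanox 0x15b3 list built above); on this node both hits are the two ports of one E810-class NIC bound to ice. A manual cross-check of the same match:

  # list the 0x159b functions the trace just reported
  lspci -d 8086:159b
  # expected: af:00.0 and af:00.1, the devices found above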
00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # modinfo irdma 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:53.241 Found net devices under 0000:af:00.0: cvl_0_0 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:53.241 Found net devices under 0000:af:00.1: cvl_0_1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
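Editor's note: because the transport is RDMA and the NICs are E810s, the harness loads irdma with RoCE enabled (the modprobe in the trace) before mapping each PCI function to its netdev. Done by hand, the bring-up is roughly:

  modprobe ice                 # netdev driver for the 0x159b ports
  modprobe irdma roce_ena=1    # RDMA provider; roce_ena=1 selects RoCEv2 instead of iWARP
  ls /sys/bus/pci/devices/0000:af:00.0/net/   # -> cvl_0_0, matching the echo above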
00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- 
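Editor's note: allocate_nic_ips then walks the RDMA-capable interfaces and assigns each a test address in 192.168.100.0/24, counting up from NVMF_IP_LEAST_ADDR=8; the ip addr show output just below confirms where .8 and .9 landed. By hand that is roughly:

  ip addr add 192.168.100.8/24 dev cvl_0_0
  ip addr add 192.168.100.9/24 dev cvl_0_1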
# echo cvl_0_1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:28:53.241 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:53.241 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:28:53.241 altname enp175s0f0np0 00:28:53.241 altname ens801f0np0 00:28:53.241 inet 192.168.100.8/24 scope global cvl_0_0 00:28:53.241 valid_lft forever preferred_lft forever 00:28:53.241 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:28:53.241 valid_lft forever preferred_lft forever 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:28:53.241 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:53.241 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:28:53.241 altname enp175s0f1np1 00:28:53.241 altname ens801f1np1 00:28:53.241 inet 192.168.100.9/24 scope global cvl_0_1 00:28:53.241 valid_lft forever preferred_lft forever 00:28:53.241 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:28:53.241 valid_lft forever preferred_lft forever 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:53.241 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 
-- # get_available_rdma_ips 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:53.242 
10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:53.242 192.168.100.9' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:53.242 192.168.100.9' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:53.242 192.168.100.9' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=16517 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 16517 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 16517 ']' 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:53.242 10:56:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.242 [2024-06-10 10:56:21.464117] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
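Editor's note: nvmfappstart boots the target with the flags traced above and blocks until its RPC socket answers; stripped of bookkeeping it is essentially:

  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                # 16517 in this run
  waitforlisten "$nvmfpid"  # returns once /var/tmp/spdk.sock accepts RPCs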
00:28:53.242 [2024-06-10 10:56:21.464161] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.242 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.242 [2024-06-10 10:56:21.524719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.242 [2024-06-10 10:56:21.608259] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.242 [2024-06-10 10:56:21.608296] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.242 [2024-06-10 10:56:21.608304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.242 [2024-06-10 10:56:21.608310] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.242 [2024-06-10 10:56:21.608315] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.242 [2024-06-10 10:56:21.608420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.242 [2024-06-10 10:56:21.608504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.242 [2024-06-10 10:56:21.608609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.242 [2024-06-10 10:56:21.608610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.501 [2024-06-10 10:56:22.337088] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x6fbbe0/0x6fb220) succeed. 00:28:53.501 [2024-06-10 10:56:22.345886] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x6fcf90/0x6fb7a0) succeed. 00:28:53.501 [2024-06-10 10:56:22.345905] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
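Editor's note: the four reactor lines match the core mask: 0x1E is binary 11110, so cores 1-4 host reactors while core 0 stays free for the bdev_svc app started later with -m 0x1. A quick check:

  # prints 1 2 3 4 for mask 0x1E
  for b in {0..7}; do (( (0x1E >> b) & 1 )) && echo "$b"; done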
New I/O unit size 24576 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.501 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.501 Malloc1 00:28:53.501 [2024-06-10 10:56:22.444785] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:53.501 Malloc2 00:28:53.501 
Malloc3 00:28:53.823 Malloc4 00:28:53.823 Malloc5 00:28:53.823 Malloc6 00:28:53.823 Malloc7 00:28:53.823 Malloc8 00:28:53.823 Malloc9 00:28:53.823 Malloc10 00:28:53.823 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.823 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:53.823 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:53.823 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=16811 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 16811 /var/tmp/bdevperf.sock 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 16811 ']' 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
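The ten identical `for i ... cat` pairs above are shutdown.sh@27-@28 appending one batch of RPCs per subsystem to rpcs.txt, which shutdown.sh@35 then plays back through a single rpc_cmd call; that batch is what produces the Malloc1-Malloc10 bdevs and the 192.168.100.8:4420 listener in the surrounding output. A sketch of the loop; the heredoc body never appears in the trace, so the four RPC lines below are illustrative assumptions consistent with that output:

num_subsystems=({1..10})
rpcs=$rootdir/test/nvmf/target/rpcs.txt   # $rootdir as used throughout the trace
rm -rf "$rpcs"                            # shutdown.sh@26
for i in "${num_subsystems[@]}"; do
    # shutdown.sh@28: append one batch of RPC commands per subsystem.
    cat >> "$rpcs" <<EOF
bdev_malloc_create 128 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420
EOF
done
rpc_cmd < "$rpcs"                         # shutdown.sh@35: one batched invocation

Batching the whole file into one rpc_cmd invocation pays the JSON-RPC client start-up cost once instead of once per command.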
00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": 
"$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 [2024-06-10 10:56:22.913049] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:54.081 [2024-06-10 10:56:22.913093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 "ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:54.081 { 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme$subsystem", 00:28:54.081 "trtype": "$TEST_TRANSPORT", 00:28:54.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "$NVMF_PORT", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.081 "hdgst": ${hdgst:-false}, 00:28:54.081 
"ddgst": ${ddgst:-false} 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 } 00:28:54.081 EOF 00:28:54.081 )") 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:54.081 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:54.081 10:56:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme1", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme2", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme3", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme4", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme5", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme6", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme7", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:54.081 "hdgst": false, 
00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme8", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme9", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 },{ 00:28:54.081 "params": { 00:28:54.081 "name": "Nvme10", 00:28:54.081 "trtype": "rdma", 00:28:54.081 "traddr": "192.168.100.8", 00:28:54.081 "adrfam": "ipv4", 00:28:54.081 "trsvcid": "4420", 00:28:54.081 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:54.081 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:54.081 "hdgst": false, 00:28:54.081 "ddgst": false 00:28:54.081 }, 00:28:54.081 "method": "bdev_nvme_attach_controller" 00:28:54.081 }' 00:28:54.081 [2024-06-10 10:56:22.974394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.081 [2024-06-10 10:56:23.046562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 16811 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:55.015 10:56:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:55.950 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 16811 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:55.950 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 16517 00:28:55.950 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:55.950 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:55.950 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:55.950 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:28:55.950 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.950 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.950 { 00:28:55.950 "params": { 00:28:55.950 "name": "Nvme$subsystem", 00:28:55.950 "trtype": "$TEST_TRANSPORT", 00:28:55.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.950 "adrfam": "ipv4", 00:28:55.950 "trsvcid": "$NVMF_PORT", 00:28:55.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.950 "hdgst": ${hdgst:-false}, 00:28:55.950 "ddgst": ${ddgst:-false} 00:28:55.950 }, 00:28:55.950 "method": "bdev_nvme_attach_controller" 00:28:55.950 } 00:28:55.950 EOF 00:28:55.950 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 [2024-06-10 10:56:24.945443] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:55.951 [2024-06-10 10:56:24.945488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17175 ] 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.951 { 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme$subsystem", 00:28:55.951 "trtype": "$TEST_TRANSPORT", 00:28:55.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "$NVMF_PORT", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.951 "hdgst": ${hdgst:-false}, 00:28:55.951 "ddgst": ${ddgst:-false} 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 } 00:28:55.951 EOF 00:28:55.951 )") 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
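The wall of repeated config+=() blocks above is nvmf/common.sh's gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem argument, then comma-joining and validating the result. A condensed sketch of the pattern as it appears in the trace (heredoc append at common.sh@554, IFS join at @557, `jq .` at @556); the outer JSON wrapper fed to jq is not visible in the trace, so the abbreviated "config" skeleton below is an assumption, and TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are expected from the harness environment:

gen_nvmf_target_json() {
    local subsystem config=()
    # One JSON fragment per requested subsystem; "${@:-1}" falls back to
    # a single subsystem when no arguments are given.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins the fragments on the first IFS character, giving
    # the single '{...},{...},...' string handed to printf in the trace;
    # jq then validates and pretty-prints the final document.
    local IFS=,
    jq . <<<"{\"config\": [${config[*]}]}"
}

The generated document is handed to the application over an anonymous pipe, as in the `--json /dev/fd/63` and `--json /dev/fd/62` invocations traced here, so no config file ever touches disk.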
00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:55.951 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.951 10:56:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme1", 00:28:55.951 "trtype": "rdma", 00:28:55.951 "traddr": "192.168.100.8", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "4420", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.951 "hdgst": false, 00:28:55.951 "ddgst": false 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 },{ 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme2", 00:28:55.951 "trtype": "rdma", 00:28:55.951 "traddr": "192.168.100.8", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "4420", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:55.951 "hdgst": false, 00:28:55.951 "ddgst": false 00:28:55.951 }, 00:28:55.951 "method": "bdev_nvme_attach_controller" 00:28:55.951 },{ 00:28:55.951 "params": { 00:28:55.951 "name": "Nvme3", 00:28:55.951 "trtype": "rdma", 00:28:55.951 "traddr": "192.168.100.8", 00:28:55.951 "adrfam": "ipv4", 00:28:55.951 "trsvcid": "4420", 00:28:55.951 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:55.951 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:55.951 "hdgst": false, 00:28:55.951 "ddgst": false 00:28:55.951 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 },{ 00:28:55.952 "params": { 00:28:55.952 "name": "Nvme4", 00:28:55.952 "trtype": "rdma", 00:28:55.952 "traddr": "192.168.100.8", 00:28:55.952 "adrfam": "ipv4", 00:28:55.952 "trsvcid": "4420", 00:28:55.952 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:55.952 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:55.952 "hdgst": false, 00:28:55.952 "ddgst": false 00:28:55.952 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 },{ 00:28:55.952 "params": { 00:28:55.952 "name": "Nvme5", 00:28:55.952 "trtype": "rdma", 00:28:55.952 "traddr": "192.168.100.8", 00:28:55.952 "adrfam": "ipv4", 00:28:55.952 "trsvcid": "4420", 00:28:55.952 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:55.952 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:55.952 "hdgst": false, 00:28:55.952 "ddgst": false 00:28:55.952 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 },{ 00:28:55.952 "params": { 00:28:55.952 "name": "Nvme6", 00:28:55.952 "trtype": "rdma", 00:28:55.952 "traddr": "192.168.100.8", 00:28:55.952 "adrfam": "ipv4", 00:28:55.952 "trsvcid": "4420", 00:28:55.952 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:55.952 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:55.952 "hdgst": false, 00:28:55.952 "ddgst": false 00:28:55.952 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 },{ 00:28:55.952 "params": { 00:28:55.952 "name": "Nvme7", 00:28:55.952 "trtype": "rdma", 00:28:55.952 "traddr": "192.168.100.8", 00:28:55.952 "adrfam": "ipv4", 00:28:55.952 "trsvcid": "4420", 00:28:55.952 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:55.952 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:55.952 "hdgst": false, 00:28:55.952 "ddgst": false 00:28:55.952 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 },{ 00:28:55.952 "params": { 00:28:55.952 "name": "Nvme8", 00:28:55.952 "trtype": "rdma", 00:28:55.952 "traddr": "192.168.100.8", 00:28:55.952 "adrfam": "ipv4", 00:28:55.952 "trsvcid": "4420", 00:28:55.952 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:28:55.952 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:55.952 "hdgst": false, 00:28:55.952 "ddgst": false 00:28:55.952 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 },{ 00:28:55.952 "params": { 00:28:55.952 "name": "Nvme9", 00:28:55.952 "trtype": "rdma", 00:28:55.952 "traddr": "192.168.100.8", 00:28:55.952 "adrfam": "ipv4", 00:28:55.952 "trsvcid": "4420", 00:28:55.952 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:55.952 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:55.952 "hdgst": false, 00:28:55.952 "ddgst": false 00:28:55.952 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 },{ 00:28:55.952 "params": { 00:28:55.952 "name": "Nvme10", 00:28:55.952 "trtype": "rdma", 00:28:55.952 "traddr": "192.168.100.8", 00:28:55.952 "adrfam": "ipv4", 00:28:55.952 "trsvcid": "4420", 00:28:55.952 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:55.952 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:55.952 "hdgst": false, 00:28:55.952 "ddgst": false 00:28:55.952 }, 00:28:55.952 "method": "bdev_nvme_attach_controller" 00:28:55.952 }' 00:28:56.210 [2024-06-10 10:56:25.006847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.210 [2024-06-10 10:56:25.079446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.147 Running I/O for 1 seconds... 00:28:58.084 00:28:58.084 Latency(us) 00:28:58.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.084 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme1n1 : 1.07 358.43 22.40 0.00 0.00 176968.01 65910.49 176759.95 00:28:58.084 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme2n1 : 1.17 383.57 23.97 0.00 0.00 163130.31 8925.38 160781.65 00:28:58.084 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme3n1 : 1.17 384.20 24.01 0.00 0.00 159283.48 8800.55 143804.71 00:28:58.084 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme4n1 : 1.16 385.91 24.12 0.00 0.00 157443.27 22719.15 126827.76 00:28:58.084 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme5n1 : 1.17 382.95 23.93 0.00 0.00 155895.01 7208.96 146800.64 00:28:58.084 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme6n1 : 1.17 382.56 23.91 0.00 0.00 153719.61 7489.83 138811.49 00:28:58.084 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme7n1 : 1.17 382.18 23.89 0.00 0.00 151591.43 7801.90 130822.34 00:28:58.084 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme8n1 : 1.17 381.77 23.86 0.00 0.00 149526.64 8113.98 122833.19 00:28:58.084 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme9n1 : 1.18 381.23 23.83 0.00 0.00 147891.23 8925.38 113346.07 00:28:58.084 Job: Nvme10n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:28:58.084 Verification LBA range: start 0x0 length 0x400 00:28:58.084 Nvme10n1 : 1.18 326.31 20.39 0.00 0.00 170096.56 8738.13 249660.95 00:28:58.084 =================================================================================================================== 00:28:58.084 Total : 3749.11 234.32 0.00 0.00 158114.30 7208.96 249660.95 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:58.343 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:58.602 rmmod nvme_rdma 00:28:58.602 rmmod nvme_fabrics 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 16517 ']' 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 16517 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 16517 ']' 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 16517 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 16517 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 16517' 00:28:58.602 killing process with pid 16517 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@968 -- # kill 16517 00:28:58.602 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 16517 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:58.861 00:28:58.861 real 0m12.357s 00:28:58.861 user 0m29.767s 00:28:58.861 sys 0m5.523s 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:58.861 ************************************ 00:28:58.861 END TEST nvmf_shutdown_tc1 00:28:58.861 ************************************ 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:58.861 10:56:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:59.121 ************************************ 00:28:59.121 START TEST nvmf_shutdown_tc2 00:28:59.121 ************************************ 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 
-- # pci_net_devs=() 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.121 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
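gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor/device ID into the e810, x722 and mlx arrays, adds the RDMA-capable families to the candidate list, and then narrows it to the family this job selected (SPDK_TEST_NVMF_NICS=e810, matching the two 0x159b devices found next). A sketch of that control flow; pci_bus_cache is assumed to be an associative map from "vendor:device" to PCI addresses that common.sh populates elsewhere:

intel=0x8086 mellanox=0x15b3
declare -A pci_bus_cache   # "vendor:device" -> PCI addresses (assumed populated elsewhere)
e810=() x722=() mlx=() pci_devs=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C
e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV (the NICs on this node)
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # ConnectX family (one of several IDs traced above)

pci_devs+=("${e810[@]}")
if [[ $TEST_TRANSPORT == rdma ]]; then       # common.sh@321: RDMA also admits x722/mlx
    pci_devs+=("${x722[@]}" "${mlx[@]}")
fi
[[ $SPDK_TEST_NVMF_NICS == e810 ]] && pci_devs=("${e810[@]}")  # @329-@330
(( ${#pci_devs[@]} == 0 )) && return 1       # @335: bail out when nothing matched

With two E810 ports present, the `(( 2 == 0 ))` check above falls through, and the per-device loop goes on to load irdma with roce_ena=1 and discover the cvl_* net devices.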
00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:59.122 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:59.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # modinfo irdma 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:59.122 
Found net devices under 0000:af:00.0: cvl_0_0 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:59.122 Found net devices under 0000:af:00.1: cvl_0_1 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:59.122 10:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:59.122 10:56:28 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:28:59.122 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:59.122 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:28:59.122 altname enp175s0f0np0 00:28:59.122 altname ens801f0np0 00:28:59.122 inet 192.168.100.8/24 scope global cvl_0_0 00:28:59.122 valid_lft forever preferred_lft forever 00:28:59.122 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:28:59.122 valid_lft forever preferred_lft forever 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:28:59.122 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:59.122 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:28:59.122 altname enp175s0f1np1 00:28:59.122 altname ens801f1np1 00:28:59.122 inet 192.168.100.9/24 scope global cvl_0_1 00:28:59.122 valid_lft forever preferred_lft forever 00:28:59.122 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:28:59.122 valid_lft forever preferred_lft forever 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:59.122 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 
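The get_ip_address traces above reduce to a single pipeline: ip -o -4 addr show prints one line per address, awk takes the fourth field ("ADDR/PREFIX"), and cut drops the prefix length. As a standalone sketch of that pattern (the function name first_ipv4_of is illustrative, not a helper from nvmf/common.sh):

    # Print the first IPv4 address bound to an interface, without the /prefix.
    first_ipv4_of() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(first_ipv4_of cvl_0_0)                 # 192.168.100.8 on this rig
    [[ -n $ip ]] || echo "no IPv4 address on cvl_0_0" >&2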
00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:59.123 192.168.100.9' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:59.123 192.168.100.9' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:59.123 192.168.100.9' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:59.123 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@10 -- # set +x 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=17735 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 17735 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 17735 ']' 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:59.383 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.383 [2024-06-10 10:56:28.195494] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:59.383 [2024-06-10 10:56:28.195541] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.383 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.383 [2024-06-10 10:56:28.257674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.383 [2024-06-10 10:56:28.330047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.383 [2024-06-10 10:56:28.330087] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.383 [2024-06-10 10:56:28.330094] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.383 [2024-06-10 10:56:28.330100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.383 [2024-06-10 10:56:28.330105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
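waitforlisten above blocks until the freshly forked nvmf_tgt (pid 17735) answers on its UNIX-domain RPC socket. A minimal sketch of that style of wait, assuming SPDK's scripts/rpc.py is available (the retry loop and the wait_for_rpc name are illustrative, not the harness's own implementation; framework_wait_init is the same RPC this log later issues against bdevperf):

    # Poll for the RPC socket, then block until the SPDK app finishes startup.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} retries=100
        while [[ ! -S $sock ]] && ((retries-- > 0)); do
            sleep 0.1
        done
        [[ -S $sock ]] || return 1              # target never came up
        ./scripts/rpc.py -s "$sock" framework_wait_init
    }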
00:28:59.383 [2024-06-10 10:56:28.330211] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.383 [2024-06-10 10:56:28.330279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.383 [2024-06-10 10:56:28.330364] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.383 [2024-06-10 10:56:28.330365] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:00.321 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:00.321 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:00.321 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.321 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:00.321 10:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 [2024-06-10 10:56:29.048344] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x104cbe0/0x104c220) succeed. 00:29:00.321 [2024-06-10 10:56:29.057245] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x104df90/0x104c7a0) succeed. 00:29:00.321 [2024-06-10 10:56:29.057267] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.321 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 Malloc1 00:29:00.321 [2024-06-10 10:56:29.156128] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:00.321 Malloc2 00:29:00.321 
Malloc3 00:29:00.321 Malloc4 00:29:00.321 Malloc5 00:29:00.321 Malloc6 00:29:00.580 Malloc7 00:29:00.580 Malloc8 00:29:00.580 Malloc9 00:29:00.580 Malloc10 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=18008 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 18008 /var/tmp/bdevperf.sock 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 18008 ']' 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
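The bdevperf invocation above receives its configuration as --json /dev/fd/63: bash process substitution, so the output of gen_nvmf_target_json reaches the perf tool through a file descriptor and never touches disk. The pattern in isolation (generate_config stands in for gen_nvmf_target_json; the empty "subsystems" list is only a placeholder):

    generate_config() {
        printf '{ "subsystems": [] }\n'         # stand-in for the real generator
    }
    # <(...) expands to /dev/fd/NN, which bdevperf opens like a regular file
    ./build/examples/bdevperf --json <(generate_config) -q 64 -o 65536 -w verify -t 10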
00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.580 { 00:29:00.580 "params": { 00:29:00.580 "name": "Nvme$subsystem", 00:29:00.580 "trtype": "$TEST_TRANSPORT", 00:29:00.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.580 "adrfam": "ipv4", 00:29:00.580 "trsvcid": "$NVMF_PORT", 00:29:00.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.580 "hdgst": ${hdgst:-false}, 00:29:00.580 "ddgst": ${ddgst:-false} 00:29:00.580 }, 00:29:00.580 "method": "bdev_nvme_attach_controller" 00:29:00.580 } 00:29:00.580 EOF 00:29:00.580 )") 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.580 { 00:29:00.580 "params": { 00:29:00.580 "name": "Nvme$subsystem", 00:29:00.580 "trtype": "$TEST_TRANSPORT", 00:29:00.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.580 "adrfam": "ipv4", 00:29:00.580 "trsvcid": "$NVMF_PORT", 00:29:00.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.580 "hdgst": ${hdgst:-false}, 00:29:00.580 "ddgst": ${ddgst:-false} 00:29:00.580 }, 00:29:00.580 "method": "bdev_nvme_attach_controller" 00:29:00.580 } 00:29:00.580 EOF 00:29:00.580 )") 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.580 { 00:29:00.580 "params": { 00:29:00.580 "name": "Nvme$subsystem", 00:29:00.580 "trtype": "$TEST_TRANSPORT", 00:29:00.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.580 "adrfam": "ipv4", 00:29:00.580 "trsvcid": "$NVMF_PORT", 00:29:00.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.580 "hdgst": ${hdgst:-false}, 00:29:00.580 "ddgst": ${ddgst:-false} 00:29:00.580 }, 00:29:00.580 "method": "bdev_nvme_attach_controller" 00:29:00.580 } 00:29:00.580 EOF 00:29:00.580 )") 00:29:00.580 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.581 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.581 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.581 { 00:29:00.581 "params": { 00:29:00.581 "name": "Nvme$subsystem", 00:29:00.581 "trtype": "$TEST_TRANSPORT", 00:29:00.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.581 "adrfam": "ipv4", 00:29:00.581 "trsvcid": 
"$NVMF_PORT", 00:29:00.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.581 "hdgst": ${hdgst:-false}, 00:29:00.581 "ddgst": ${ddgst:-false} 00:29:00.581 }, 00:29:00.581 "method": "bdev_nvme_attach_controller" 00:29:00.581 } 00:29:00.581 EOF 00:29:00.581 )") 00:29:00.581 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.840 { 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme$subsystem", 00:29:00.840 "trtype": "$TEST_TRANSPORT", 00:29:00.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "$NVMF_PORT", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.840 "hdgst": ${hdgst:-false}, 00:29:00.840 "ddgst": ${ddgst:-false} 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 } 00:29:00.840 EOF 00:29:00.840 )") 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.840 { 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme$subsystem", 00:29:00.840 "trtype": "$TEST_TRANSPORT", 00:29:00.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "$NVMF_PORT", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.840 "hdgst": ${hdgst:-false}, 00:29:00.840 "ddgst": ${ddgst:-false} 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 } 00:29:00.840 EOF 00:29:00.840 )") 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.840 { 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme$subsystem", 00:29:00.840 "trtype": "$TEST_TRANSPORT", 00:29:00.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "$NVMF_PORT", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.840 "hdgst": ${hdgst:-false}, 00:29:00.840 "ddgst": ${ddgst:-false} 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 } 00:29:00.840 EOF 00:29:00.840 )") 00:29:00.840 [2024-06-10 10:56:29.625688] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:29:00.840 [2024-06-10 10:56:29.625733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18008 ] 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.840 { 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme$subsystem", 00:29:00.840 "trtype": "$TEST_TRANSPORT", 00:29:00.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "$NVMF_PORT", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.840 "hdgst": ${hdgst:-false}, 00:29:00.840 "ddgst": ${ddgst:-false} 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 } 00:29:00.840 EOF 00:29:00.840 )") 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.840 { 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme$subsystem", 00:29:00.840 "trtype": "$TEST_TRANSPORT", 00:29:00.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "$NVMF_PORT", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.840 "hdgst": ${hdgst:-false}, 00:29:00.840 "ddgst": ${ddgst:-false} 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 } 00:29:00.840 EOF 00:29:00.840 )") 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.840 { 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme$subsystem", 00:29:00.840 "trtype": "$TEST_TRANSPORT", 00:29:00.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "$NVMF_PORT", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.840 "hdgst": ${hdgst:-false}, 00:29:00.840 "ddgst": ${ddgst:-false} 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 } 00:29:00.840 EOF 00:29:00.840 )") 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
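Each config+=(...) above stores one per-controller JSON object; the jq . / IFS=, / printf traces are those fragments being comma-joined into the enclosing document and validated, with jq exiting non-zero on malformed JSON so a bad config fails the test early. The join step in isolation, with the enclosing array standing in for the fuller "subsystems" document the harness appears to build:

    config=('{"name": "Nvme1"}' '{"name": "Nvme2"}')    # illustrative fragments
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .      # comma-join, validate, pretty-print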
00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:00.840 10:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme1", 00:29:00.840 "trtype": "rdma", 00:29:00.840 "traddr": "192.168.100.8", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "4420", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.840 "hdgst": false, 00:29:00.840 "ddgst": false 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 },{ 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme2", 00:29:00.840 "trtype": "rdma", 00:29:00.840 "traddr": "192.168.100.8", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "4420", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:00.840 "hdgst": false, 00:29:00.840 "ddgst": false 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 },{ 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme3", 00:29:00.840 "trtype": "rdma", 00:29:00.840 "traddr": "192.168.100.8", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "4420", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:00.840 "hdgst": false, 00:29:00.840 "ddgst": false 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 },{ 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme4", 00:29:00.840 "trtype": "rdma", 00:29:00.840 "traddr": "192.168.100.8", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "4420", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:00.840 "hdgst": false, 00:29:00.840 "ddgst": false 00:29:00.840 }, 00:29:00.840 "method": "bdev_nvme_attach_controller" 00:29:00.840 },{ 00:29:00.840 "params": { 00:29:00.840 "name": "Nvme5", 00:29:00.840 "trtype": "rdma", 00:29:00.840 "traddr": "192.168.100.8", 00:29:00.840 "adrfam": "ipv4", 00:29:00.840 "trsvcid": "4420", 00:29:00.840 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:00.840 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:00.840 "hdgst": false, 00:29:00.841 "ddgst": false 00:29:00.841 }, 00:29:00.841 "method": "bdev_nvme_attach_controller" 00:29:00.841 },{ 00:29:00.841 "params": { 00:29:00.841 "name": "Nvme6", 00:29:00.841 "trtype": "rdma", 00:29:00.841 "traddr": "192.168.100.8", 00:29:00.841 "adrfam": "ipv4", 00:29:00.841 "trsvcid": "4420", 00:29:00.841 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:00.841 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:00.841 "hdgst": false, 00:29:00.841 "ddgst": false 00:29:00.841 }, 00:29:00.841 "method": "bdev_nvme_attach_controller" 00:29:00.841 },{ 00:29:00.841 "params": { 00:29:00.841 "name": "Nvme7", 00:29:00.841 "trtype": "rdma", 00:29:00.841 "traddr": "192.168.100.8", 00:29:00.841 "adrfam": "ipv4", 00:29:00.841 "trsvcid": "4420", 00:29:00.841 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:00.841 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:00.841 "hdgst": false, 00:29:00.841 "ddgst": false 00:29:00.841 }, 00:29:00.841 "method": "bdev_nvme_attach_controller" 00:29:00.841 },{ 00:29:00.841 "params": { 00:29:00.841 "name": "Nvme8", 00:29:00.841 "trtype": "rdma", 00:29:00.841 "traddr": "192.168.100.8", 00:29:00.841 "adrfam": "ipv4", 00:29:00.841 "trsvcid": "4420", 00:29:00.841 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:00.841 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:29:00.841 "hdgst": false, 00:29:00.841 "ddgst": false 00:29:00.841 }, 00:29:00.841 "method": "bdev_nvme_attach_controller" 00:29:00.841 },{ 00:29:00.841 "params": { 00:29:00.841 "name": "Nvme9", 00:29:00.841 "trtype": "rdma", 00:29:00.841 "traddr": "192.168.100.8", 00:29:00.841 "adrfam": "ipv4", 00:29:00.841 "trsvcid": "4420", 00:29:00.841 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:00.841 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:00.841 "hdgst": false, 00:29:00.841 "ddgst": false 00:29:00.841 }, 00:29:00.841 "method": "bdev_nvme_attach_controller" 00:29:00.841 },{ 00:29:00.841 "params": { 00:29:00.841 "name": "Nvme10", 00:29:00.841 "trtype": "rdma", 00:29:00.841 "traddr": "192.168.100.8", 00:29:00.841 "adrfam": "ipv4", 00:29:00.841 "trsvcid": "4420", 00:29:00.841 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:00.841 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:00.841 "hdgst": false, 00:29:00.841 "ddgst": false 00:29:00.841 }, 00:29:00.841 "method": "bdev_nvme_attach_controller" 00:29:00.841 }' 00:29:00.841 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.841 [2024-06-10 10:56:29.687580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.841 [2024-06-10 10:56:29.759585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.777 Running I/O for 10 seconds... 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.777 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.036 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.036 10:56:30 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:02.036 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:02.036 10:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 18008 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 18008 ']' 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 18008 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:02.294 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 18008 00:29:02.553 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:02.553 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:02.553 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 18008' 00:29:02.553 killing process with pid 18008 00:29:02.553 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 18008 00:29:02.553 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 18008 00:29:02.553 Received shutdown signal, test time was about 0.761805 seconds 00:29:02.553 00:29:02.553 Latency(us) 00:29:02.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.553 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme1n1 : 0.74 344.88 21.56 0.00 0.00 180417.10 64412.53 164776.23 00:29:02.553 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme2n1 : 0.74 344.03 21.50 0.00 0.00 177111.53 49183.21 152792.50 00:29:02.553 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme3n1 : 0.75 424.01 26.50 0.00 0.00 140751.43 5679.79 135815.56 00:29:02.553 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme4n1 : 0.76 423.09 26.44 0.00 0.00 138046.90 9799.19 118838.61 00:29:02.553 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme5n1 : 0.76 405.14 25.32 0.00 0.00 140605.17 10485.76 140808.78 00:29:02.553 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme6n1 : 0.76 371.63 23.23 0.00 0.00 148936.48 10673.01 144803.35 00:29:02.553 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme7n1 : 0.76 377.66 23.60 0.00 0.00 143383.57 10860.25 136814.20 00:29:02.553 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme8n1 : 0.76 382.32 23.90 0.00 0.00 138473.14 9674.36 123831.83 00:29:02.553 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme9n1 : 0.75 340.78 21.30 0.00 0.00 152406.55 10236.10 118339.29 00:29:02.553 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.553 Verification LBA range: start 0x0 length 0x400 00:29:02.553 Nvme10n1 : 0.75 255.00 15.94 0.00 0.00 198418.37 11546.82 254654.17 00:29:02.553 =================================================================================================================== 00:29:02.553 Total : 3668.54 229.28 0.00 0.00 153403.63 5679.79 254654.17 00:29:02.812 10:56:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 17735 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:03.746 10:56:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:03.746 rmmod nvme_rdma 00:29:03.746 rmmod nvme_fabrics 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 17735 ']' 00:29:03.746 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 17735 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 17735 ']' 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 17735 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 17735 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 17735' 00:29:03.747 killing process with pid 17735 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 17735 00:29:03.747 10:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 17735 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:04.315 00:29:04.315 real 0m5.232s 00:29:04.315 user 0m21.329s 00:29:04.315 sys 0m1.090s 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.315 ************************************ 00:29:04.315 END TEST nvmf_shutdown_tc2 00:29:04.315 ************************************ 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.315 ************************************ 00:29:04.315 START TEST nvmf_shutdown_tc3 00:29:04.315 ************************************ 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:29:04.315 10:56:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:04.315 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:04.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
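gather_supported_nvmf_pci_devs above classifies ports purely by PCI vendor:device ID; 0x8086:0x159b is the E810 function on this rig, and the x722/mlx arrays hold the other supported IDs. Those IDs can be read straight from sysfs, as in this sketch using the address from the log:

    pci=0000:af:00.0
    vendor=$(< "/sys/bus/pci/devices/$pci/vendor")      # 0x8086 (Intel)
    device=$(< "/sys/bus/pci/devices/$pci/device")      # 0x159b (E810)
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found $pci ($vendor - $device)"
    fi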
00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:29:04.315 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # modinfo irdma 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:04.316 Found net devices under 0000:af:00.0: cvl_0_0 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:04.316 Found net devices under 0000:af:00.1: cvl_0_1 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
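Two details worth noting in the block above: RoCE is switched on for the Intel RDMA driver with modprobe irdma roce_ena=1 (without it the e810 ports come up in iWARP mode on some driver versions), and mapping a PCI function to its netdev names is a plain sysfs glob, exactly as traced:

    pci=0000:af:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one path per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip to bare names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"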
00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- 
# echo cvl_0_1 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:04.316 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:29:04.575 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:04.575 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:29:04.575 altname enp175s0f0np0 00:29:04.575 altname ens801f0np0 00:29:04.575 inet 192.168.100.8/24 scope global cvl_0_0 00:29:04.575 valid_lft forever preferred_lft forever 00:29:04.575 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:29:04.575 valid_lft forever preferred_lft forever 00:29:04.575 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:04.575 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:29:04.575 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:29:04.576 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:04.576 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:29:04.576 altname enp175s0f1np1 00:29:04.576 altname ens801f1np1 00:29:04.576 inet 192.168.100.9/24 scope global cvl_0_1 00:29:04.576 valid_lft forever preferred_lft forever 00:29:04.576 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:29:04.576 valid_lft forever preferred_lft forever 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 
-- # get_available_rdma_ips 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:04.576 
10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:04.576 192.168.100.9' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:04.576 192.168.100.9' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:04.576 192.168.100.9' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=18807 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 18807 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 18807 ']' 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:04.576 10:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.576 [2024-06-10 10:56:33.490187] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
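The xtrace above is the harness discovering its RDMA interfaces and deriving the target addresses: get_ip_address parses the one-line output of `ip -o -4 addr show <if>` with awk and cut, the per-interface results are gathered into RDMA_IP_LIST, and head/tail split that list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that pipeline (interface names taken from this run; the surrounding nvmf/common.sh scaffolding is simplified):

# get_ip_address, as traced above: with -o, ip prints one line per address
# whose 4th field is e.g. "192.168.100.8/24"; awk selects the field and cut
# strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# One IP per RDMA-capable netdev, then the head/tail split from the log.
RDMA_IP_LIST=$(for nic in cvl_0_0 cvl_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
# In this run: 192.168.100.8 and 192.168.100.9.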
00:29:04.576 [2024-06-10 10:56:33.490232] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.576 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.576 [2024-06-10 10:56:33.549682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.835 [2024-06-10 10:56:33.627526] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.835 [2024-06-10 10:56:33.627561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.835 [2024-06-10 10:56:33.627568] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.835 [2024-06-10 10:56:33.627574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.835 [2024-06-10 10:56:33.627579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.835 [2024-06-10 10:56:33.627692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.835 [2024-06-10 10:56:33.627796] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.835 [2024-06-10 10:56:33.627904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.835 [2024-06-10 10:56:33.627905] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.403 [2024-06-10 10:56:34.348854] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1905be0/0x1905220) succeed. 00:29:05.403 [2024-06-10 10:56:34.357697] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1906f90/0x19057a0) succeed. 00:29:05.403 [2024-06-10 10:56:34.357719] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.403 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.662 Malloc1 00:29:05.662 [2024-06-10 10:56:34.456806] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:05.662 Malloc2 00:29:05.662 
Malloc3 00:29:05.662 Malloc4 00:29:05.662 Malloc5 00:29:05.662 Malloc6 00:29:05.662 Malloc7 00:29:05.921 Malloc8 00:29:05.921 Malloc9 00:29:05.921 Malloc10 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=19078 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 19078 /var/tmp/bdevperf.sock 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 19078 ']' 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:05.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
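gen_nvmf_target_json, invoked above with subsystem ids 1 through 10, is what produces the long JSON dump in the entries that follow: it appends one heredoc-generated params block per id to a config array, then joins the blocks with commas into the bdev_nvme_attach_controller list. A condensed sketch of that accumulation pattern (env values TEST_TRANSPORT=rdma, NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_PORT=4420 come from this run; the jq validation step and the outer SPDK config envelope are omitted here):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # $subsystem expands inside the heredoc; ${hdgst:-false} and
        # ${ddgst:-false} default both digest flags, as in the rendered output.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Comma-join in a subshell so IFS stays untouched; this is the
    # '{ ... },{ ... }' document printf '%s\n' emits further down.
    (IFS=,; printf '%s\n' "${config[*]}")
}

gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10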
00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.921 { 00:29:05.921 "params": { 00:29:05.921 "name": "Nvme$subsystem", 00:29:05.921 "trtype": "$TEST_TRANSPORT", 00:29:05.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.921 "adrfam": "ipv4", 00:29:05.921 "trsvcid": "$NVMF_PORT", 00:29:05.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.921 "hdgst": ${hdgst:-false}, 00:29:05.921 "ddgst": ${ddgst:-false} 00:29:05.921 }, 00:29:05.921 "method": "bdev_nvme_attach_controller" 00:29:05.921 } 00:29:05.921 EOF 00:29:05.921 )") 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.921 { 00:29:05.921 "params": { 00:29:05.921 "name": "Nvme$subsystem", 00:29:05.921 "trtype": "$TEST_TRANSPORT", 00:29:05.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.921 "adrfam": "ipv4", 00:29:05.921 "trsvcid": "$NVMF_PORT", 00:29:05.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.921 "hdgst": ${hdgst:-false}, 00:29:05.921 "ddgst": ${ddgst:-false} 00:29:05.921 }, 00:29:05.921 "method": "bdev_nvme_attach_controller" 00:29:05.921 } 00:29:05.921 EOF 00:29:05.921 )") 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.921 { 00:29:05.921 "params": { 00:29:05.921 "name": "Nvme$subsystem", 00:29:05.921 "trtype": "$TEST_TRANSPORT", 00:29:05.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.921 "adrfam": "ipv4", 00:29:05.921 "trsvcid": "$NVMF_PORT", 00:29:05.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.921 "hdgst": ${hdgst:-false}, 00:29:05.921 "ddgst": ${ddgst:-false} 00:29:05.921 }, 00:29:05.921 "method": "bdev_nvme_attach_controller" 00:29:05.921 } 00:29:05.921 EOF 00:29:05.921 )") 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.921 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.922 { 00:29:05.922 "params": { 00:29:05.922 "name": "Nvme$subsystem", 00:29:05.922 "trtype": "$TEST_TRANSPORT", 00:29:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.922 "adrfam": "ipv4", 00:29:05.922 "trsvcid": 
"$NVMF_PORT", 00:29:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.922 "hdgst": ${hdgst:-false}, 00:29:05.922 "ddgst": ${ddgst:-false} 00:29:05.922 }, 00:29:05.922 "method": "bdev_nvme_attach_controller" 00:29:05.922 } 00:29:05.922 EOF 00:29:05.922 )") 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.922 { 00:29:05.922 "params": { 00:29:05.922 "name": "Nvme$subsystem", 00:29:05.922 "trtype": "$TEST_TRANSPORT", 00:29:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.922 "adrfam": "ipv4", 00:29:05.922 "trsvcid": "$NVMF_PORT", 00:29:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.922 "hdgst": ${hdgst:-false}, 00:29:05.922 "ddgst": ${ddgst:-false} 00:29:05.922 }, 00:29:05.922 "method": "bdev_nvme_attach_controller" 00:29:05.922 } 00:29:05.922 EOF 00:29:05.922 )") 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.922 { 00:29:05.922 "params": { 00:29:05.922 "name": "Nvme$subsystem", 00:29:05.922 "trtype": "$TEST_TRANSPORT", 00:29:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.922 "adrfam": "ipv4", 00:29:05.922 "trsvcid": "$NVMF_PORT", 00:29:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.922 "hdgst": ${hdgst:-false}, 00:29:05.922 "ddgst": ${ddgst:-false} 00:29:05.922 }, 00:29:05.922 "method": "bdev_nvme_attach_controller" 00:29:05.922 } 00:29:05.922 EOF 00:29:05.922 )") 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.922 [2024-06-10 10:56:34.930779] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:29:05.922 [2024-06-10 10:56:34.930824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19078 ] 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.922 { 00:29:05.922 "params": { 00:29:05.922 "name": "Nvme$subsystem", 00:29:05.922 "trtype": "$TEST_TRANSPORT", 00:29:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.922 "adrfam": "ipv4", 00:29:05.922 "trsvcid": "$NVMF_PORT", 00:29:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.922 "hdgst": ${hdgst:-false}, 00:29:05.922 "ddgst": ${ddgst:-false} 00:29:05.922 }, 00:29:05.922 "method": "bdev_nvme_attach_controller" 00:29:05.922 } 00:29:05.922 EOF 00:29:05.922 )") 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.922 { 00:29:05.922 "params": { 00:29:05.922 "name": "Nvme$subsystem", 00:29:05.922 "trtype": "$TEST_TRANSPORT", 00:29:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.922 "adrfam": "ipv4", 00:29:05.922 "trsvcid": "$NVMF_PORT", 00:29:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.922 "hdgst": ${hdgst:-false}, 00:29:05.922 "ddgst": ${ddgst:-false} 00:29:05.922 }, 00:29:05.922 "method": "bdev_nvme_attach_controller" 00:29:05.922 } 00:29:05.922 EOF 00:29:05.922 )") 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.922 { 00:29:05.922 "params": { 00:29:05.922 "name": "Nvme$subsystem", 00:29:05.922 "trtype": "$TEST_TRANSPORT", 00:29:05.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.922 "adrfam": "ipv4", 00:29:05.922 "trsvcid": "$NVMF_PORT", 00:29:05.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.922 "hdgst": ${hdgst:-false}, 00:29:05.922 "ddgst": ${ddgst:-false} 00:29:05.922 }, 00:29:05.922 "method": "bdev_nvme_attach_controller" 00:29:05.922 } 00:29:05.922 EOF 00:29:05.922 )") 00:29:05.922 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:06.181 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.181 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.181 { 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme$subsystem", 00:29:06.181 "trtype": "$TEST_TRANSPORT", 00:29:06.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "$NVMF_PORT", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.181 "hdgst": 
${hdgst:-false}, 00:29:06.181 "ddgst": ${ddgst:-false} 00:29:06.181 }, 00:29:06.181 "method": "bdev_nvme_attach_controller" 00:29:06.181 } 00:29:06.181 EOF 00:29:06.181 )") 00:29:06.181 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:06.181 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:29:06.181 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.181 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:06.181 10:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme1", 00:29:06.181 "trtype": "rdma", 00:29:06.181 "traddr": "192.168.100.8", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "4420", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.181 "hdgst": false, 00:29:06.181 "ddgst": false 00:29:06.181 }, 00:29:06.181 "method": "bdev_nvme_attach_controller" 00:29:06.181 },{ 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme2", 00:29:06.181 "trtype": "rdma", 00:29:06.181 "traddr": "192.168.100.8", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "4420", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:06.181 "hdgst": false, 00:29:06.181 "ddgst": false 00:29:06.181 }, 00:29:06.181 "method": "bdev_nvme_attach_controller" 00:29:06.181 },{ 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme3", 00:29:06.181 "trtype": "rdma", 00:29:06.181 "traddr": "192.168.100.8", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "4420", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:06.181 "hdgst": false, 00:29:06.181 "ddgst": false 00:29:06.181 }, 00:29:06.181 "method": "bdev_nvme_attach_controller" 00:29:06.181 },{ 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme4", 00:29:06.181 "trtype": "rdma", 00:29:06.181 "traddr": "192.168.100.8", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "4420", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:06.181 "hdgst": false, 00:29:06.181 "ddgst": false 00:29:06.181 }, 00:29:06.181 "method": "bdev_nvme_attach_controller" 00:29:06.181 },{ 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme5", 00:29:06.181 "trtype": "rdma", 00:29:06.181 "traddr": "192.168.100.8", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "4420", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:06.181 "hdgst": false, 00:29:06.181 "ddgst": false 00:29:06.181 }, 00:29:06.181 "method": "bdev_nvme_attach_controller" 00:29:06.181 },{ 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme6", 00:29:06.181 "trtype": "rdma", 00:29:06.181 "traddr": "192.168.100.8", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "4420", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:06.181 "hdgst": false, 00:29:06.181 "ddgst": false 00:29:06.181 }, 00:29:06.181 "method": "bdev_nvme_attach_controller" 00:29:06.181 },{ 00:29:06.181 "params": { 00:29:06.181 "name": "Nvme7", 00:29:06.181 "trtype": "rdma", 00:29:06.181 "traddr": "192.168.100.8", 00:29:06.181 "adrfam": "ipv4", 00:29:06.181 "trsvcid": "4420", 00:29:06.181 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:06.181 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:29:06.182 "hdgst": false, 00:29:06.182 "ddgst": false 00:29:06.182 }, 00:29:06.182 "method": "bdev_nvme_attach_controller" 00:29:06.182 },{ 00:29:06.182 "params": { 00:29:06.182 "name": "Nvme8", 00:29:06.182 "trtype": "rdma", 00:29:06.182 "traddr": "192.168.100.8", 00:29:06.182 "adrfam": "ipv4", 00:29:06.182 "trsvcid": "4420", 00:29:06.182 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:06.182 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:06.182 "hdgst": false, 00:29:06.182 "ddgst": false 00:29:06.182 }, 00:29:06.182 "method": "bdev_nvme_attach_controller" 00:29:06.182 },{ 00:29:06.182 "params": { 00:29:06.182 "name": "Nvme9", 00:29:06.182 "trtype": "rdma", 00:29:06.182 "traddr": "192.168.100.8", 00:29:06.182 "adrfam": "ipv4", 00:29:06.182 "trsvcid": "4420", 00:29:06.182 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:06.182 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:06.182 "hdgst": false, 00:29:06.182 "ddgst": false 00:29:06.182 }, 00:29:06.182 "method": "bdev_nvme_attach_controller" 00:29:06.182 },{ 00:29:06.182 "params": { 00:29:06.182 "name": "Nvme10", 00:29:06.182 "trtype": "rdma", 00:29:06.182 "traddr": "192.168.100.8", 00:29:06.182 "adrfam": "ipv4", 00:29:06.182 "trsvcid": "4420", 00:29:06.182 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:06.182 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:06.182 "hdgst": false, 00:29:06.182 "ddgst": false 00:29:06.182 }, 00:29:06.182 "method": "bdev_nvme_attach_controller" 00:29:06.182 }' 00:29:06.182 [2024-06-10 10:56:34.994018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.182 [2024-06-10 10:56:35.066296] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.119 Running I/O for 10 seconds... 00:29:07.119 10:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:07.119 10:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:29:07.119 10:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:07.119 10:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.119 10:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.119 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.119 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:07.119 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:07.119 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.120 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.379 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.379 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:07.379 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:07.379 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 18807 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 18807 ']' 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 18807 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:07.638 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 18807 00:29:07.897 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:07.897 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:07.897 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 18807' 00:29:07.897 killing process with pid 18807 00:29:07.897 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 18807 00:29:07.897 10:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@973 -- # wait 18807 00:29:08.156 10:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:08.156 10:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:08.421 [2024-06-10 10:56:37.312055] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.421 [2024-06-10 10:56:37.312516] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:29:08.421 [2024-06-10 10:56:37.312539] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.421 [2024-06-10 10:56:37.312728] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:29:08.421 [2024-06-10 10:56:37.312741] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.421 [2024-06-10 10:56:37.312922] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:29:08.421 [2024-06-10 10:56:37.312934] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.421 [2024-06-10 10:56:37.313125] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:29:08.421 [2024-06-10 10:56:37.313143] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.421 [2024-06-10 10:56:37.313326] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:29:08.421 [2024-06-10 10:56:37.313338] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.421 [2024-06-10 10:56:37.313517] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 
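The polling that preceded the kill is waitforio (shutdown.sh@50-69 above): it reads Nvme1n1's num_read_ops over the bdevperf RPC socket up to ten times, a quarter second apart, and succeeds once at least 100 reads have completed, proving I/O was in flight when killprocess took pid 18807 down; the disconnect and reset-controller notices here are bdevperf reacting to that. The loop, reconstructed from the xtrace (a sketch, not the verbatim script; rpc_cmd is the harness wrapper around scripts/rpc.py):

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        local read_io_count
        # First pass in this run saw 3 ops, second saw 195.
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # returns 0 -> safe to kill the target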
00:29:08.421 [2024-06-10 10:56:37.313528] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.421 [2024-06-10 10:56:37.313539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0xb6035205 00:29:08.421 [2024-06-10 10:56:37.313548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0xb6035205 00:29:08.421 [2024-06-10 10:56:37.313573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0xb6035205 00:29:08.421 [2024-06-10 10:56:37.313592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0xb6035205 00:29:08.421 [2024-06-10 10:56:37.313608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001a9afe00 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x9b565043 00:29:08.421 [2024-06-10 10:56:37.313797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.421 [2024-06-10 10:56:37.313806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001a91f980 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0 00:29:08.422 [2024-06-10 10:56:37.313936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001a88f500 len:0x10000 key:0x9b565043 00:29:08.422 [2024-06-10 10:56:37.313942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0
00:29:08.422 [repeated nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs: WRITE sqid:1 lba:40320-40832 and READ sqid:1 lba:32768-36736, len:128, various keys, each ABORTED - SQ DELETION (00/08) qid:1 cdw0:3c8d510 sqhd:6940 p:0 m:0 dnr:0]
00:29:08.423 [2024-06-10 10:56:37.314817] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller.
00:29:08.423 [2024-06-10 10:56:37.314832] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:29:08.423 [repeated nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs: WRITE sqid:1 lba:36864-40832 and READ sqid:1 lba:32768-36736, len:128, various keys, each ABORTED - SQ DELETION (00/08) qid:1 cdw0:3c8d7f0 sqhd:66c0 p:0 m:0 dnr:0]
00:29:08.425 [2024-06-10 10:56:37.323634] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller.
00:29:08.425 [2024-06-10 10:56:37.323653] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:29:08.425 [repeated nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs: WRITE sqid:1 lba:32768-40832, len:128, various keys, each ABORTED - SQ DELETION (00/08) qid:1 cdw0:3c8dad0 sqhd:6440 p:0 m:0 dnr:0]
00:29:08.427 [2024-06-10 10:56:37.325161] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller.
00:29:08.427 [2024-06-10 10:56:37.325178] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.427 [2024-06-10 10:56:37.325187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b44f900 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x3e7ef77d 00:29:08.427 [2024-06-10 10:56:37.325422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b7afe00 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b71f980 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.427 [2024-06-10 10:56:37.325719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x2b7998cb 00:29:08.427 [2024-06-10 10:56:37.325729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b68f500 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.325862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.325870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.332491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.332518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.332546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.332570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.332592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x2b7998cb 00:29:08.428 [2024-06-10 10:56:37.332616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b9f0000 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b95fb80 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.332986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.332997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.333009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.333019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.333032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.333042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.333055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001b8cf700 len:0x10000 key:0x9e809bb 00:29:08.428 [2024-06-10 10:56:37.333067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.333080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x3e7ef77d 00:29:08.428 [2024-06-10 10:56:37.333090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:3c8ddb0 sqhd:61c0 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.333803] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:29:08.428 [2024-06-10 10:56:37.333886] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.428 [2024-06-10 10:56:37.333903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.428 [2024-06-10 10:56:37.333913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bc61e0 sqhd:2900 p:0 m:0 dnr:0 00:29:08.428 [2024-06-10 10:56:37.333924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.333934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bc61e0 sqhd:2900 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.333944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.333967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bc61e0 sqhd:2900 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.333979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.333989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bc61e0 sqhd:2900 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.429 [2024-06-10 10:56:37.334283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:08.429 [2024-06-10 10:56:37.334293] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
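The same teardown then reaches the admin queue: the outstanding ASYNC EVENT REQUESTs are aborted with the identical SQ DELETION status, the completion queue raises transport error -6 (ENXIO, the verbs device is gone), and nvme_ctrlr_fail marks nqn.2016-06.io.spdk:cnode4 as failed so bdev_nvme can drive a reset; the "Unable to perform failover, already in progress" notice only means a reset for that controller is already queued. While this is in flight, the controller states can be inspected over the bdevperf RPC socket this run uses; a sketch, assuming the socket path that appears later in the log:

  rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  # Lists each attached controller with its transport info and state
  # (exact field names depend on the SPDK version).
  "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
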
00:29:08.429 [2024-06-10 10:56:37.334310] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.429 [2024-06-10 10:56:37.334322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2c7a050 sqhd:6340 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2c7a050 sqhd:6340 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2c7a050 sqhd:6340 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2c7a050 sqhd:6340 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.429 [2024-06-10 10:56:37.334566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:08.429 [2024-06-10 10:56:37.334574] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.429 [2024-06-10 10:56:37.334590] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.429 [2024-06-10 10:56:37.334601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:286f610 sqhd:5300 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:286f610 sqhd:5300 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:286f610 sqhd:5300 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:286f610 sqhd:5300 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.429 [2024-06-10 10:56:37.334836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:08.429 [2024-06-10 10:56:37.334845] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.429 [2024-06-10 10:56:37.334861] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.429 [2024-06-10 10:56:37.334872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:23cf100 sqhd:9c80 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:23cf100 sqhd:9c80 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:23cf100 sqhd:9c80 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.334934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.334944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:23cf100 sqhd:9c80 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.429 [2024-06-10 10:56:37.335128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:08.429 [2024-06-10 10:56:37.335138] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.429 [2024-06-10 10:56:37.335153] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.429 [2024-06-10 10:56:37.335165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bbee80 sqhd:d040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bbee80 sqhd:d040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bbee80 sqhd:d040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:2bbee80 sqhd:d040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.429 [2024-06-10 10:56:37.335398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.429 [2024-06-10 10:56:37.335406] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.429 [2024-06-10 10:56:37.335422] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.429 [2024-06-10 10:56:37.335434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d3a0 sqhd:5040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d3a0 sqhd:5040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d3a0 sqhd:5040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d3a0 sqhd:5040 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.429 [2024-06-10 10:56:37.335664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:08.429 [2024-06-10 10:56:37.335673] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.429 [2024-06-10 10:56:37.335690] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.429 [2024-06-10 10:56:37.335702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d680 sqhd:8500 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d680 sqhd:8500 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d680 sqhd:8500 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.429 [2024-06-10 10:56:37.335774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d680 sqhd:8500 p:0 m:0 dnr:0 00:29:08.429 [2024-06-10 10:56:37.335919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.429 [2024-06-10 10:56:37.335931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:08.429 [2024-06-10 10:56:37.335940] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.430 [2024-06-10 10:56:37.335962] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.430 [2024-06-10 10:56:37.335975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.430 [2024-06-10 10:56:37.335984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d960 sqhd:f1c0 p:0 m:0 dnr:0 00:29:08.430 [2024-06-10 10:56:37.335995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.430 [2024-06-10 10:56:37.336005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d960 sqhd:f1c0 p:0 m:0 dnr:0 00:29:08.430 [2024-06-10 10:56:37.336017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.430 [2024-06-10 10:56:37.336026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d960 sqhd:f1c0 p:0 m:0 dnr:0 00:29:08.430 [2024-06-10 10:56:37.336037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.430 [2024-06-10 10:56:37.336047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8d960 sqhd:f1c0 p:0 m:0 dnr:0 00:29:08.430 [2024-06-10 10:56:37.336194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.430 [2024-06-10 10:56:37.336207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:08.430 [2024-06-10 10:56:37.336215] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.430 [2024-06-10 10:56:37.336230] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.430 [2024-06-10 10:56:37.336244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.430 [2024-06-10 10:56:37.336254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3d24060 sqhd:b1c0 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.336265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.431 [2024-06-10 10:56:37.336275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3d24060 sqhd:b1c0 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.336286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.431 [2024-06-10 10:56:37.336295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3d24060 sqhd:b1c0 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.336306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.431 [2024-06-10 10:56:37.336317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3d24060 sqhd:b1c0 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.336459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.431 [2024-06-10 10:56:37.336472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:08.431 [2024-06-10 10:56:37.336481] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.431 [2024-06-10 10:56:37.336495] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:08.431 [2024-06-10 10:56:37.336507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.431 [2024-06-10 10:56:37.336516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8dc40 sqhd:e080 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.336527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.431 [2024-06-10 10:56:37.336536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8dc40 sqhd:e080 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.336547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.431 [2024-06-10 10:56:37.336556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8dc40 sqhd:e080 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.336567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.431 [2024-06-10 10:56:37.336576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:3c8dc40 sqhd:e080 p:0 m:0 dnr:0 00:29:08.431 [2024-06-10 10:56:37.356360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:08.431 [2024-06-10 10:56:37.356376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:08.431 [2024-06-10 10:56:37.356383] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.431 [2024-06-10 10:56:37.360089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360199] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.431 [2024-06-10 10:56:37.360210] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.431 [2024-06-10 10:56:37.360218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:08.431 [2024-06-10 10:56:37.360227] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:08.431 [2024-06-10 10:56:37.360281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:08.431 [2024-06-10 10:56:37.360310] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:08.431 [2024-06-10 10:56:37.375413] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:08.431 [2024-06-10 10:56:37.375462] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:08.431 [2024-06-10 10:56:37.375482] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:29:08.431 [2024-06-10 10:56:37.375521] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:08.431 [2024-06-10 10:56:37.375543] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:08.431 [2024-06-10 10:56:37.375559] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:29:08.431 [2024-06-10 10:56:37.375591] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:08.431 [2024-06-10 10:56:37.375611] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:08.431 [2024-06-10 10:56:37.375627] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:29:08.431 [2024-06-10 10:56:37.375660] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:08.431 [2024-06-10 10:56:37.375680] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:08.431 [2024-06-10 10:56:37.375696] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:29:08.431 [2024-06-10 10:56:37.375729] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:08.431 [2024-06-10 10:56:37.375749] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:08.431 [2024-06-10 10:56:37.375765] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:29:08.431 [2024-06-10 10:56:37.375797] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:08.431 [2024-06-10 10:56:37.375818] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:08.431 [2024-06-10 10:56:37.375833] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:29:08.431 [2024-06-10 10:56:37.375928] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but 
received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:08.431 [2024-06-10 10:56:37.375937] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:08.431 [2024-06-10 10:56:37.375942] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0
00:29:08.431 [2024-06-10 10:56:37.375967] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:08.431 [2024-06-10 10:56:37.375975] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:08.431 [2024-06-10 10:56:37.375980] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080
00:29:08.431 [2024-06-10 10:56:37.375991] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:08.431 [2024-06-10 10:56:37.375998] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:08.431 [2024-06-10 10:56:37.376002] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0
00:29:08.432 [2024-06-10 10:56:37.376013] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:08.432 [2024-06-10 10:56:37.376019] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:08.432 [2024-06-10 10:56:37.376024] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500
00:29:09.069 task offset: 36864 on job bdev=Nvme7n1 fails
00:29:09.069
00:29:09.069 Latency(us)
00:29:09.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.069 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme1n1 ended in about 1.90 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme1n1 : 1.90 134.95 8.43 33.74 0.00 372827.28 51430.16 1002638.38
00:29:09.069 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme2n1 ended in about 1.90 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme2n1 : 1.90 134.91 8.43 33.73 0.00 369798.58 66909.14 1002638.38
00:29:09.069 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme3n1 ended in about 1.90 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme3n1 : 1.90 138.56 8.66 33.72 0.00 358908.33 4899.60 1002638.38
00:29:09.069 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme4n1 ended in about 1.90 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme4n1 : 1.90 168.53 10.53 33.71 0.00 303096.40 5367.71 1002638.38
00:29:09.069 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme5n1 ended in about 1.90 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme5n1 : 1.90 151.63 9.48 33.70 0.00 327954.13 20347.37 1002638.38
00:29:09.069 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme6n1 ended in about 1.90 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme6n1 : 1.90 143.69 8.98 33.69 0.00 339710.58 28711.01 1006632.96
00:29:09.069 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme7n1 ended in about 1.43 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme7n1 : 1.43 178.49 11.16 44.62 0.00 264545.04 34702.87 647121.19
00:29:09.069 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme8n1 ended in about 1.44 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme8n1 : 1.44 178.37 11.15 44.59 0.00 261822.17 42941.68 643126.61
00:29:09.069 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme9n1 ended in about 1.44 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme9n1 : 1.44 178.26 11.14 44.57 0.00 258814.93 51180.50 635137.46
00:29:09.069 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.069 Job: Nvme10n1 ended in about 1.44 seconds with error
00:29:09.069 Verification LBA range: start 0x0 length 0x400
00:29:09.069 Nvme10n1 : 1.44 133.62 8.35 44.54 0.00 319725.47 51430.16 619159.16
00:29:09.069 ===================================================================================================================
00:29:09.069 Total : 1541.02 96.31 380.59 0.00 317697.37 4899.60 1006632.96
00:29:09.069 [2024-06-10 10:56:37.843688] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
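One consistency check the summary allows: Fail/s times runtime comes out at roughly 64 for every job (Nvme1n1: 33.74 x 1.90 ≈ 64.1; Nvme7n1: 44.62 x 1.43 ≈ 63.8), matching the queue depth of 64, i.e. each bdev lost about one full in-flight window when its submission queues were deleted. Average/min/max are latencies in microseconds, per the Latency(us) header. The spdk_app_stop warning is bdevperf exiting non-zero because verification I/O failed, the state shutdown_tc3 expects before it reaps the process with the kill -9 below. A sketch of the check in shell (console.log is a hypothetical capture of the table above, one record per line):

  # Fail/s ($7) times runtime ($4) per job row,
  # e.g. "00:29:09.069 Nvme1n1 : 1.90 134.95 8.43 33.74 ..."
  awk '$3 == ":" && $2 ~ /^Nvme/ { printf "%-9s %.1f failed I/Os\n", $2, $4 * $7 }' console.log

00:29:09.329 10:56:38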
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:09.329 00:29:09.329 real 0m4.957s 00:29:09.329 user 0m17.119s 00:29:09.329 sys 0m1.150s 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:09.329 ************************************ 00:29:09.329 END TEST nvmf_shutdown_tc3 00:29:09.329 ************************************ 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:09.329 00:29:09.329 real 0m22.867s 00:29:09.329 user 1m8.335s 00:29:09.329 sys 0m7.986s 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:09.329 10:56:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.329 ************************************ 00:29:09.329 END TEST nvmf_shutdown 00:29:09.329 ************************************ 00:29:09.329 10:56:38 nvmf_rdma -- nvmf/nvmf.sh@85 -- # timing_exit target 00:29:09.329 10:56:38 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:09.329 10:56:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:09.329 10:56:38 nvmf_rdma -- nvmf/nvmf.sh@87 -- # timing_enter host 00:29:09.329 10:56:38 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:09.329 10:56:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:09.329 10:56:38 nvmf_rdma -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:29:09.329 10:56:38 nvmf_rdma -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:09.330 10:56:38 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:09.330 10:56:38 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:09.330 10:56:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:09.330 ************************************ 00:29:09.330 START TEST nvmf_multicontroller 00:29:09.330 ************************************ 00:29:09.330 10:56:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:09.589 * Looking for test storage... 
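(Note on the teardown traced above: nvmftestfini unloads the fabrics kernel modules inside a set +e / for i in {1..20} retry loop, because modprobe -r nvme-rdma can fail while queue pairs of the freshly killed bdevperf are still draining. A minimal sketch of that pattern, assuming the same module names and retry bound as the nvmf/common.sh@117-125 trace; the back-off sleep is an assumption, the trace only shows the loop and the set +e / set -e bracketing:)

# Sketch of the unload-with-retry pattern from the nvmftestfini trace above.
nvmfcleanup_sketch() {
    sync
    set +e                              # a busy unload must not abort the test
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                         # assumed back-off while qpairs drain
    done
    set -e
    return 0
}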
00:29:09.589 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:09.589 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:29:09.590 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:29:09.590 00:29:09.590 real 0m0.108s 00:29:09.590 user 0m0.053s 00:29:09.590 sys 0m0.059s 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:09.590 10:56:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:09.590 ************************************ 00:29:09.590 END TEST nvmf_multicontroller 00:29:09.590 ************************************ 00:29:09.590 10:56:38 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:09.590 10:56:38 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:09.590 10:56:38 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:09.590 10:56:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:09.590 ************************************ 00:29:09.590 START TEST nvmf_aer 00:29:09.590 ************************************ 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:09.590 * Looking for test storage... 00:29:09.590 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:09.590 10:56:38 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:09.590 10:56:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.157 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:16.158 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:16.158 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@377 -- # modinfo irdma 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:16.158 Found net devices under 0000:af:00.0: cvl_0_0 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:16.158 Found net devices under 0000:af:00.1: cvl_0_1 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:16.158 10:56:44 
nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo cvl_0_1 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:16.158 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:29:16.158 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:16.158 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:29:16.158 altname enp175s0f0np0 00:29:16.158 altname ens801f0np0 00:29:16.158 inet 192.168.100.8/24 scope global cvl_0_0 00:29:16.158 valid_lft forever preferred_lft forever 00:29:16.158 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:29:16.159 valid_lft forever preferred_lft forever 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:29:16.159 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:16.159 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:29:16.159 altname enp175s0f1np1 00:29:16.159 altname ens801f1np1 00:29:16.159 inet 192.168.100.9/24 scope global cvl_0_1 00:29:16.159 valid_lft forever preferred_lft forever 00:29:16.159 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:29:16.159 valid_lft forever preferred_lft forever 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo cvl_0_1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:16.159 192.168.100.9' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:16.159 192.168.100.9' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:16.159 
192.168.100.9' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=23181 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 23181 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 23181 ']' 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:16.159 10:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.159 [2024-06-10 10:56:44.495085] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:29:16.159 [2024-06-10 10:56:44.495128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.159 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.159 [2024-06-10 10:56:44.554682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.159 [2024-06-10 10:56:44.632621] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.159 [2024-06-10 10:56:44.632659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.159 [2024-06-10 10:56:44.632665] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.159 [2024-06-10 10:56:44.632671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.159 [2024-06-10 10:56:44.632676] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
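(At this point the target is up: nvmfappstart launched nvmf_tgt with -i 0 -e 0xFFFF -m 0xF, and waitforlisten held the test until pid 23181 answered on /var/tmp/spdk.sock before any rpc_cmd ran. A minimal sketch of that start-and-wait handshake; the spdk_get_version liveness probe and the 100 x 0.1 s timeout are assumptions, not shown in the trace:)

# Sketch: launch the target, then poll its RPC socket until it answers.
# Assumes $rootdir points at the SPDK checkout, as elsewhere in this log.
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # spdk_get_version is a harmless probe against the app's RPC socket
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null && break
    sleep 0.1
done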
00:29:16.159 [2024-06-10 10:56:44.632717] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.159 [2024-06-10 10:56:44.632737] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.159 [2024-06-10 10:56:44.632832] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:16.159 [2024-06-10 10:56:44.632833] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 [2024-06-10 10:56:45.360090] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x24948f0/0x2493f30) succeed. 00:29:16.419 [2024-06-10 10:56:45.369038] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x2495ca0/0x24944b0) succeed. 00:29:16.419 [2024-06-10 10:56:45.369061] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 Malloc0 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 [2024-06-10 10:56:45.424095] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 [ 00:29:16.419 { 00:29:16.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:16.419 "subtype": "Discovery", 00:29:16.419 "listen_addresses": [], 00:29:16.419 "allow_any_host": true, 00:29:16.419 "hosts": [] 00:29:16.419 }, 00:29:16.419 { 00:29:16.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.419 "subtype": "NVMe", 00:29:16.419 "listen_addresses": [ 00:29:16.419 { 00:29:16.419 "trtype": "RDMA", 00:29:16.419 "adrfam": "IPv4", 00:29:16.419 "traddr": "192.168.100.8", 00:29:16.419 "trsvcid": "4420" 00:29:16.419 } 00:29:16.419 ], 00:29:16.419 "allow_any_host": true, 00:29:16.419 "hosts": [], 00:29:16.419 "serial_number": "SPDK00000000000001", 00:29:16.419 "model_number": "SPDK bdev Controller", 00:29:16.419 "max_namespaces": 2, 00:29:16.419 "min_cntlid": 1, 00:29:16.419 "max_cntlid": 65519, 00:29:16.419 "namespaces": [ 00:29:16.419 { 00:29:16.419 "nsid": 1, 00:29:16.419 "bdev_name": "Malloc0", 00:29:16.419 "name": "Malloc0", 00:29:16.419 "nguid": "4E12E51BCED04DEC9DCE1943FEFC3652", 00:29:16.419 "uuid": "4e12e51b-ced0-4dec-9dce-1943fefc3652" 00:29:16.419 } 00:29:16.419 ] 00:29:16.419 } 00:29:16.419 ] 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=23225 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:29:16.419 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:29:16.679 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.679 Malloc1 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.679 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.938 [ 00:29:16.938 { 00:29:16.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:16.938 "subtype": "Discovery", 00:29:16.938 "listen_addresses": [], 00:29:16.938 "allow_any_host": true, 00:29:16.938 "hosts": [] 00:29:16.938 }, 00:29:16.938 { 00:29:16.938 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.938 "subtype": "NVMe", 00:29:16.938 "listen_addresses": [ 00:29:16.938 { 00:29:16.938 "trtype": "RDMA", 00:29:16.938 "adrfam": "IPv4", 00:29:16.938 "traddr": "192.168.100.8", 00:29:16.938 "trsvcid": "4420" 00:29:16.938 } 00:29:16.938 ], 00:29:16.938 "allow_any_host": true, 00:29:16.938 "hosts": [], 00:29:16.938 "serial_number": "SPDK00000000000001", 00:29:16.938 "model_number": "SPDK bdev Controller", 00:29:16.938 "max_namespaces": 2, 00:29:16.938 "min_cntlid": 1, 00:29:16.938 "max_cntlid": 65519, 00:29:16.938 "namespaces": [ 00:29:16.938 { 00:29:16.938 "nsid": 1, 00:29:16.938 "bdev_name": "Malloc0", 00:29:16.938 "name": "Malloc0", 00:29:16.938 "nguid": "4E12E51BCED04DEC9DCE1943FEFC3652", 00:29:16.938 "uuid": "4e12e51b-ced0-4dec-9dce-1943fefc3652" 00:29:16.938 }, 00:29:16.938 { 00:29:16.938 "nsid": 2, 00:29:16.938 "bdev_name": "Malloc1", 00:29:16.938 "name": "Malloc1", 00:29:16.938 "nguid": "0E986976CAAC4FBBBC39B48BF1FC0776", 00:29:16.939 "uuid": "0e986976-caac-4fbb-bc39-b48bf1fc0776" 00:29:16.939 } 00:29:16.939 ] 00:29:16.939 } 00:29:16.939 ] 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 23225 00:29:16.939 Asynchronous Event Request test 00:29:16.939 Attaching to 192.168.100.8 00:29:16.939 Attached to 192.168.100.8 00:29:16.939 Registering asynchronous event callbacks... 00:29:16.939 Starting namespace attribute notice tests for all controllers... 00:29:16.939 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:16.939 aer_cb - Changed Namespace 00:29:16.939 Cleaning up... 
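(The AER handshake above is synchronized through a touch file: the aer tool creates /tmp/aer_touch_file once its event callbacks are registered, the test polls for it with waitforfile, and only then hot-attaches Malloc1 as nsid 2, which raises the Namespace Attribute Changed notice seen in aer_cb as log page 4, aen_event_type 0x02. The polling helper below is reconstructed from the autotest_common.sh@1264-1275 xtrace above; the exact return path is an assumption:)

# waitforfile, reconstructed from the xtrace: poll every 0.1 s for the
# touch file, giving up after 200 iterations (about 20 s).
waitforfile() {
    local i=0
    while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$1" ]                         # exit non-zero if the file never appeared
}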
00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:16.939 rmmod nvme_rdma 00:29:16.939 rmmod nvme_fabrics 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 23181 ']' 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 23181 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 23181 ']' 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 23181 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 23181 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 23181' 00:29:16.939 killing process with pid 23181 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@968 -- # kill 23181 00:29:16.939 10:56:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@973 -- # wait 23181 00:29:17.199 10:56:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.199 10:56:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 -- # [[ 
rdma == \t\c\p ]] 00:29:17.199 00:29:17.199 real 0m7.607s 00:29:17.199 user 0m7.573s 00:29:17.199 sys 0m4.835s 00:29:17.199 10:56:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:17.199 10:56:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:17.199 ************************************ 00:29:17.199 END TEST nvmf_aer 00:29:17.199 ************************************ 00:29:17.199 10:56:46 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:29:17.199 10:56:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:17.199 10:56:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:17.199 10:56:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:17.199 ************************************ 00:29:17.199 START TEST nvmf_async_init 00:29:17.199 ************************************ 00:29:17.199 10:56:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:29:17.459 * Looking for test storage... 00:29:17.459 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:17.459 10:56:46 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:17.459 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e6a14d64006a4a8985c0e02c18a80c86 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:17.460 10:56:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
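A minimal sketch of the test parameters traced at host/async_init.sh@13-20 a few entries above, assuming plain bash with uuidgen available: the NGUID handed to nvmf_subsystem_add_ns later in this test is simply a random UUID with its dashes stripped, yielding the 32 hex characters NVMe expects.

null_bdev_size=1024          # null bdev size in MiB (matches num_blocks 2097152 x 512 B in the bdev dump below)
null_block_size=512          # logical block size in bytes
null_bdev=null0              # bdev name the target will export
nvme_bdev=nvme0              # controller name the host side will attach as
nguid=$(uuidgen | tr -d -)   # e.g. e6a14d64006a4a8985c0e02c18a80c86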
00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:24.030 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:24.030 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@377 -- # modinfo irdma 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:24.030 Found net devices under 0000:af:00.0: cvl_0_0 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:24.030 Found net devices under 0000:af:00.1: cvl_0_1 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:24.030 10:56:51 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.030 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo cvl_0_1 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:29:24.031 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:24.031 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:29:24.031 altname enp175s0f0np0 00:29:24.031 altname ens801f0np0 00:29:24.031 inet 192.168.100.8/24 scope global cvl_0_0 00:29:24.031 valid_lft forever preferred_lft forever 00:29:24.031 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link 
proto kernel_ll 00:29:24.031 valid_lft forever preferred_lft forever 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:29:24.031 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:24.031 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:29:24.031 altname enp175s0f1np1 00:29:24.031 altname ens801f1np1 00:29:24.031 inet 192.168.100.9/24 scope global cvl_0_1 00:29:24.031 valid_lft forever preferred_lft forever 00:29:24.031 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:29:24.031 valid_lft forever preferred_lft forever 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:24.031 10:56:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 
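The get_ip_address helper traced at nvmf/common.sh@112-113 above reduces to a one-liner; a sketch assuming iproute2 plus standard awk/cut:

get_ip_address() {
    local interface=$1
    # "ip -o" prints one record per line; field 4 is "ADDR/PREFIX",
    # and cut drops the prefix length, leaving the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address cvl_0_0   # -> 192.168.100.8 on this testbed
get_ip_address cvl_0_1   # -> 192.168.100.9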
00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo cvl_0_1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:24.031 192.168.100.9' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:24.031 192.168.100.9' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:24.031 192.168.100.9' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=26757 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 26757 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 26757 ']' 
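The first/second target IPs traced at nvmf/common.sh@456-458 above come from splitting the two-line RDMA_IP_LIST; a sketch of the same head/tail idiom:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9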
00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.031 [2024-06-10 10:56:52.121517] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:29:24.031 [2024-06-10 10:56:52.121567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.031 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.031 [2024-06-10 10:56:52.183286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.031 [2024-06-10 10:56:52.256456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.031 [2024-06-10 10:56:52.256496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.031 [2024-06-10 10:56:52.256502] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.031 [2024-06-10 10:56:52.256508] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.031 [2024-06-10 10:56:52.256512] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.031 [2024-06-10 10:56:52.256537] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.031 [2024-06-10 10:56:52.973030] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x23677e0/0x2366e20) succeed. 00:29:24.031 [2024-06-10 10:56:52.981643] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x2368a90/0x23673a0) succeed. 00:29:24.031 [2024-06-10 10:56:52.981665] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.031 10:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:24.032 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.032 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.032 null0 00:29:24.032 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.032 10:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:24.032 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.032 10:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e6a14d64006a4a8985c0e02c18a80c86 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.032 [2024-06-10 10:56:53.023147] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.032 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.291 nvme0n1 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.291 [ 00:29:24.291 { 00:29:24.291 "name": "nvme0n1", 00:29:24.291 "aliases": [ 00:29:24.291 "e6a14d64-006a-4a89-85c0-e02c18a80c86" 00:29:24.291 ], 00:29:24.291 "product_name": "NVMe disk", 00:29:24.291 "block_size": 512, 00:29:24.291 "num_blocks": 2097152, 00:29:24.291 
"uuid": "e6a14d64-006a-4a89-85c0-e02c18a80c86", 00:29:24.291 "assigned_rate_limits": { 00:29:24.291 "rw_ios_per_sec": 0, 00:29:24.291 "rw_mbytes_per_sec": 0, 00:29:24.291 "r_mbytes_per_sec": 0, 00:29:24.291 "w_mbytes_per_sec": 0 00:29:24.291 }, 00:29:24.291 "claimed": false, 00:29:24.291 "zoned": false, 00:29:24.291 "supported_io_types": { 00:29:24.291 "read": true, 00:29:24.291 "write": true, 00:29:24.291 "unmap": false, 00:29:24.291 "write_zeroes": true, 00:29:24.291 "flush": true, 00:29:24.291 "reset": true, 00:29:24.291 "compare": true, 00:29:24.291 "compare_and_write": true, 00:29:24.291 "abort": true, 00:29:24.291 "nvme_admin": true, 00:29:24.291 "nvme_io": true 00:29:24.291 }, 00:29:24.291 "memory_domains": [ 00:29:24.291 { 00:29:24.291 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:24.291 "dma_device_type": 0 00:29:24.291 } 00:29:24.291 ], 00:29:24.291 "driver_specific": { 00:29:24.291 "nvme": [ 00:29:24.291 { 00:29:24.291 "trid": { 00:29:24.291 "trtype": "RDMA", 00:29:24.291 "adrfam": "IPv4", 00:29:24.291 "traddr": "192.168.100.8", 00:29:24.291 "trsvcid": "4420", 00:29:24.291 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:24.291 }, 00:29:24.291 "ctrlr_data": { 00:29:24.291 "cntlid": 1, 00:29:24.291 "vendor_id": "0x8086", 00:29:24.291 "model_number": "SPDK bdev Controller", 00:29:24.291 "serial_number": "00000000000000000000", 00:29:24.291 "firmware_revision": "24.09", 00:29:24.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.291 "oacs": { 00:29:24.291 "security": 0, 00:29:24.291 "format": 0, 00:29:24.291 "firmware": 0, 00:29:24.291 "ns_manage": 0 00:29:24.291 }, 00:29:24.291 "multi_ctrlr": true, 00:29:24.291 "ana_reporting": false 00:29:24.291 }, 00:29:24.291 "vs": { 00:29:24.291 "nvme_version": "1.3" 00:29:24.291 }, 00:29:24.291 "ns_data": { 00:29:24.291 "id": 1, 00:29:24.291 "can_share": true 00:29:24.291 } 00:29:24.291 } 00:29:24.291 ], 00:29:24.291 "mp_policy": "active_passive" 00:29:24.291 } 00:29:24.291 } 00:29:24.291 ] 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.291 [2024-06-10 10:56:53.124274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:24.291 [2024-06-10 10:56:53.149524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:24.291 [2024-06-10 10:56:53.171173] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.291 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.292 [ 00:29:24.292 { 00:29:24.292 "name": "nvme0n1", 00:29:24.292 "aliases": [ 00:29:24.292 "e6a14d64-006a-4a89-85c0-e02c18a80c86" 00:29:24.292 ], 00:29:24.292 "product_name": "NVMe disk", 00:29:24.292 "block_size": 512, 00:29:24.292 "num_blocks": 2097152, 00:29:24.292 "uuid": "e6a14d64-006a-4a89-85c0-e02c18a80c86", 00:29:24.292 "assigned_rate_limits": { 00:29:24.292 "rw_ios_per_sec": 0, 00:29:24.292 "rw_mbytes_per_sec": 0, 00:29:24.292 "r_mbytes_per_sec": 0, 00:29:24.292 "w_mbytes_per_sec": 0 00:29:24.292 }, 00:29:24.292 "claimed": false, 00:29:24.292 "zoned": false, 00:29:24.292 "supported_io_types": { 00:29:24.292 "read": true, 00:29:24.292 "write": true, 00:29:24.292 "unmap": false, 00:29:24.292 "write_zeroes": true, 00:29:24.292 "flush": true, 00:29:24.292 "reset": true, 00:29:24.292 "compare": true, 00:29:24.292 "compare_and_write": true, 00:29:24.292 "abort": true, 00:29:24.292 "nvme_admin": true, 00:29:24.292 "nvme_io": true 00:29:24.292 }, 00:29:24.292 "memory_domains": [ 00:29:24.292 { 00:29:24.292 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:24.292 "dma_device_type": 0 00:29:24.292 } 00:29:24.292 ], 00:29:24.292 "driver_specific": { 00:29:24.292 "nvme": [ 00:29:24.292 { 00:29:24.292 "trid": { 00:29:24.292 "trtype": "RDMA", 00:29:24.292 "adrfam": "IPv4", 00:29:24.292 "traddr": "192.168.100.8", 00:29:24.292 "trsvcid": "4420", 00:29:24.292 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:24.292 }, 00:29:24.292 "ctrlr_data": { 00:29:24.292 "cntlid": 2, 00:29:24.292 "vendor_id": "0x8086", 00:29:24.292 "model_number": "SPDK bdev Controller", 00:29:24.292 "serial_number": "00000000000000000000", 00:29:24.292 "firmware_revision": "24.09", 00:29:24.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.292 "oacs": { 00:29:24.292 "security": 0, 00:29:24.292 "format": 0, 00:29:24.292 "firmware": 0, 00:29:24.292 "ns_manage": 0 00:29:24.292 }, 00:29:24.292 "multi_ctrlr": true, 00:29:24.292 "ana_reporting": false 00:29:24.292 }, 00:29:24.292 "vs": { 00:29:24.292 "nvme_version": "1.3" 00:29:24.292 }, 00:29:24.292 "ns_data": { 00:29:24.292 "id": 1, 00:29:24.292 "can_share": true 00:29:24.292 } 00:29:24.292 } 00:29:24.292 ], 00:29:24.292 "mp_policy": "active_passive" 00:29:24.292 } 00:29:24.292 } 00:29:24.292 ] 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rgKIkBxjo4 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:24.292 10:56:53 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rgKIkBxjo4 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.292 [2024-06-10 10:56:53.242217] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rgKIkBxjo4 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rgKIkBxjo4 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.292 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.292 [2024-06-10 10:56:53.262254] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:24.551 nvme0n1 00:29:24.551 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.551 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:24.551 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.551 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.551 [ 00:29:24.551 { 00:29:24.551 "name": "nvme0n1", 00:29:24.551 "aliases": [ 00:29:24.551 "e6a14d64-006a-4a89-85c0-e02c18a80c86" 00:29:24.551 ], 00:29:24.551 "product_name": "NVMe disk", 00:29:24.551 "block_size": 512, 00:29:24.551 "num_blocks": 2097152, 00:29:24.551 "uuid": "e6a14d64-006a-4a89-85c0-e02c18a80c86", 00:29:24.551 "assigned_rate_limits": { 00:29:24.551 "rw_ios_per_sec": 0, 00:29:24.551 "rw_mbytes_per_sec": 0, 00:29:24.551 "r_mbytes_per_sec": 0, 00:29:24.551 "w_mbytes_per_sec": 0 00:29:24.551 }, 00:29:24.551 "claimed": false, 00:29:24.551 "zoned": false, 00:29:24.551 "supported_io_types": { 00:29:24.551 "read": true, 00:29:24.551 "write": true, 00:29:24.551 "unmap": false, 00:29:24.551 "write_zeroes": true, 00:29:24.551 "flush": true, 00:29:24.551 "reset": true, 00:29:24.551 "compare": true, 00:29:24.551 "compare_and_write": true, 00:29:24.551 "abort": true, 
00:29:24.551 "nvme_admin": true, 00:29:24.551 "nvme_io": true 00:29:24.551 }, 00:29:24.551 "memory_domains": [ 00:29:24.551 { 00:29:24.551 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:24.551 "dma_device_type": 0 00:29:24.551 } 00:29:24.551 ], 00:29:24.551 "driver_specific": { 00:29:24.551 "nvme": [ 00:29:24.551 { 00:29:24.551 "trid": { 00:29:24.551 "trtype": "RDMA", 00:29:24.551 "adrfam": "IPv4", 00:29:24.551 "traddr": "192.168.100.8", 00:29:24.551 "trsvcid": "4421", 00:29:24.551 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:24.551 }, 00:29:24.551 "ctrlr_data": { 00:29:24.551 "cntlid": 3, 00:29:24.551 "vendor_id": "0x8086", 00:29:24.551 "model_number": "SPDK bdev Controller", 00:29:24.551 "serial_number": "00000000000000000000", 00:29:24.551 "firmware_revision": "24.09", 00:29:24.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.551 "oacs": { 00:29:24.551 "security": 0, 00:29:24.551 "format": 0, 00:29:24.551 "firmware": 0, 00:29:24.551 "ns_manage": 0 00:29:24.551 }, 00:29:24.551 "multi_ctrlr": true, 00:29:24.551 "ana_reporting": false 00:29:24.551 }, 00:29:24.552 "vs": { 00:29:24.552 "nvme_version": "1.3" 00:29:24.552 }, 00:29:24.552 "ns_data": { 00:29:24.552 "id": 1, 00:29:24.552 "can_share": true 00:29:24.552 } 00:29:24.552 } 00:29:24.552 ], 00:29:24.552 "mp_policy": "active_passive" 00:29:24.552 } 00:29:24.552 } 00:29:24.552 ] 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.rgKIkBxjo4 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:24.552 rmmod nvme_rdma 00:29:24.552 rmmod nvme_fabrics 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 26757 ']' 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 26757 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 26757 ']' 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 26757 00:29:24.552 10:56:53 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 26757 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 26757' 00:29:24.552 killing process with pid 26757 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 26757 00:29:24.552 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 26757 00:29:24.811 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:24.811 10:56:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:24.811 00:29:24.811 real 0m7.484s 00:29:24.811 user 0m3.447s 00:29:24.811 sys 0m4.695s 00:29:24.811 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:24.811 10:56:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:24.811 ************************************ 00:29:24.811 END TEST nvmf_async_init 00:29:24.811 ************************************ 00:29:24.811 10:56:53 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:29:24.811 10:56:53 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:24.811 10:56:53 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:24.811 10:56:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:24.811 ************************************ 00:29:24.811 START TEST dma 00:29:24.811 ************************************ 00:29:24.811 10:56:53 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:29:24.811 * Looking for test storage... 
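For reference, the secure-channel leg of the async_init test that just ended (host/async_init.sh@53-65 above) reduces to the sketch below; the redirection of the PSK into the temp file is implied by the chmod in the trace rather than shown, and TLS here is flagged experimental by bdev_nvme_rpc itself:

key_path=$(mktemp)                                     # e.g. /tmp/tmp.rgKIkBxjo4
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 \
    -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"                                      # cleanup, as in @75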
00:29:24.811 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:24.811 10:56:53 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.811 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:24.811 10:56:53 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.811 10:56:53 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.811 10:56:53 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.811 10:56:53 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.811 10:56:53 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.812 10:56:53 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.812 10:56:53 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:29:24.812 10:56:53 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.812 10:56:53 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:29:24.812 10:56:53 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:29:24.812 10:56:53 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:29:24.812 10:56:53 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:29:24.812 10:56:53 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.812 10:56:53 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.812 10:56:53 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:24.812 10:56:53 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:29:24.812 10:56:53 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.374 10:56:59 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:31.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ 
rdma == rdma ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:31.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@377 -- # modinfo irdma 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:31.374 Found net devices under 0000:af:00.0: cvl_0_0 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:31.374 Found net devices under 0000:af:00.1: cvl_0_1 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:31.374 10:56:59 nvmf_rdma.dma -- 
nvmf/common.sh@58 -- # uname 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo cvl_0_1 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:31.374 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:29:31.375 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:31.375 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:29:31.375 altname enp175s0f0np0 00:29:31.375 altname ens801f0np0 00:29:31.375 inet 192.168.100.8/24 scope global cvl_0_0 00:29:31.375 valid_lft forever 
preferred_lft forever 00:29:31.375 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:29:31.375 valid_lft forever preferred_lft forever 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:29:31.375 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:29:31.375 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:29:31.375 altname enp175s0f1np1 00:29:31.375 altname ens801f1np1 00:29:31.375 inet 192.168.100.9/24 scope global cvl_0_1 00:29:31.375 valid_lft forever preferred_lft forever 00:29:31.375 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:29:31.375 valid_lft forever preferred_lft forever 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo cvl_0_0 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo cvl_0_1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:31.375 10:56:59 nvmf_rdma.dma -- 
nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:31.375 192.168.100.9' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:31.375 192.168.100.9' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:31.375 192.168.100.9' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:31.375 10:56:59 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=30327 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 30327 00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@830 -- # '[' -z 30327 ']' 00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
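Before the target comes up, common.sh resolves the two RDMA-capable netdevs into the addresses the rest of the run connects to. The sketch below condenses the logic traced above; get_ip_address mirrors the traced helper, while the hard-coded interface names stand in for get_rdma_if_list on this rig:

get_ip_address() {
    local interface=$1
    # column 4 of `ip -o -4` is "addr/prefix"; strip the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in cvl_0_0 cvl_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

The head/tail pair is why the trace echoes the two-line address list twice: once to pick the first address, once to pick the second.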
00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:31.375 10:56:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.375 10:56:59 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:31.375 [2024-06-10 10:56:59.561088] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:29:31.375 [2024-06-10 10:56:59.561132] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.375 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.375 [2024-06-10 10:56:59.620855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:31.375 [2024-06-10 10:56:59.698809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.375 [2024-06-10 10:56:59.698843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.375 [2024-06-10 10:56:59.698850] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.375 [2024-06-10 10:56:59.698856] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.375 [2024-06-10 10:56:59.698861] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.375 [2024-06-10 10:56:59.698908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.375 [2024-06-10 10:56:59.698911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.375 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:31.375 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@863 -- # return 0 00:29:31.375 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:31.375 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:31.375 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.375 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.375 10:57:00 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:31.375 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.375 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.375 [2024-06-10 10:57:00.397367] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x18e62d0/0x18e5910) succeed. 00:29:31.634 [2024-06-10 10:57:00.406272] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x18e7580/0x18e5e90) succeed. 00:29:31.634 [2024-06-10 10:57:00.406295] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.634 10:57:00 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.634 Malloc0 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.634 10:57:00 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.634 10:57:00 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.634 10:57:00 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:31.634 [2024-06-10 10:57:00.508371] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:31.634 10:57:00 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.634 10:57:00 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:29:31.635 10:57:00 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:31.635 { 00:29:31.635 "params": { 00:29:31.635 "name": "Nvme$subsystem", 00:29:31.635 "trtype": "$TEST_TRANSPORT", 00:29:31.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:31.635 "adrfam": "ipv4", 00:29:31.635 "trsvcid": "$NVMF_PORT", 00:29:31.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:31.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:31.635 "hdgst": ${hdgst:-false}, 00:29:31.635 "ddgst": ${ddgst:-false} 00:29:31.635 }, 00:29:31.635 "method": "bdev_nvme_attach_controller" 00:29:31.635 } 00:29:31.635 EOF 00:29:31.635 )") 00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
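The gen_nvmf_target_json expansion traced above builds the attach-controller configuration that test_dma reads over /dev/fd/62: one bdev_nvme_attach_controller stanza per subsystem, accumulated from a heredoc and normalized with jq. A reduced sketch under this run's environment (the real helper carries more surrounding script state than the trace shows):

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .  # jq pretty-prints and rejects malformed JSON
}

The function is consumed via process substitution, e.g. test_dma ... --json <(gen_nvmf_target_json 0), which is why the command line in the trace shows the anonymous file /dev/fd/62.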
00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:29:31.635 10:57:00 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:31.635 "params": { 00:29:31.635 "name": "Nvme0", 00:29:31.635 "trtype": "rdma", 00:29:31.635 "traddr": "192.168.100.8", 00:29:31.635 "adrfam": "ipv4", 00:29:31.635 "trsvcid": "4420", 00:29:31.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:31.635 "hdgst": false, 00:29:31.635 "ddgst": false 00:29:31.635 }, 00:29:31.635 "method": "bdev_nvme_attach_controller" 00:29:31.635 }' 00:29:31.635 [2024-06-10 10:57:00.554016] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:29:31.635 [2024-06-10 10:57:00.554068] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30608 ] 00:29:31.635 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.635 [2024-06-10 10:57:00.607826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:31.893 [2024-06-10 10:57:00.680936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.893 [2024-06-10 10:57:00.680939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.175 bdev Nvme0n1 reports 1 memory domains 00:29:37.175 bdev Nvme0n1 supports RDMA memory domain 00:29:37.175 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:37.175 ========================================================================== 00:29:37.175 Latency [us] 00:29:37.175 IOPS MiB/s Average min max 00:29:37.175 Core 2: 21079.37 82.34 758.36 256.91 11691.33 00:29:37.175 Core 3: 21302.55 83.21 750.37 276.30 11870.56 00:29:37.175 ========================================================================== 00:29:37.175 Total : 42381.91 165.55 754.34 256.91 11870.56 00:29:37.175 00:29:37.175 Total operations: 211930, translate 211930 pull_push 0 memzero 0 00:29:37.175 10:57:06 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:29:37.175 10:57:06 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:29:37.175 10:57:06 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:29:37.175 [2024-06-10 10:57:06.096517] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
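Every test_dma invocation in this test uses the same workload shape and differs only in the bdev and the -x mode. The annotated reconstruction below reads the flags by the usual SPDK perf-tool conventions and is a sketch, not authoritative documentation of test_dma:

args=(
    -q 16          # queue depth
    -o 4096        # I/O size in bytes
    -w randrw      # random mixed read/write workload
    -M 70          # read percentage of the mix
    -t 5           # run time in seconds
    -m 0xc         # core mask: cores 2 and 3, matching the reactor lines above
    -b Nvme0n1     # bdev under test (Malloc0 and lvs0/lvol0 in the later runs)
    -x translate   # DMA path being exercised (pull_push and memzero later)
    -f             # passed exactly as in the trace; its meaning isn't shown in the log
)
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma \
    "${args[@]}" --json <(gen_nvmf_target_json 0)

The totals line above is consistent with the table: 42381.91 aggregate IOPS x 5 s is roughly 211,910, close to the 211,930 operations reported, all counted as translate because bdev Nvme0n1 advertises an RDMA memory domain.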
00:29:37.175 [2024-06-10 10:57:06.096570] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31598 ] 00:29:37.175 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.175 [2024-06-10 10:57:06.150344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:37.434 [2024-06-10 10:57:06.222780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.434 [2024-06-10 10:57:06.222783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.772 bdev Malloc0 reports 2 memory domains 00:29:42.772 bdev Malloc0 doesn't support RDMA memory domain 00:29:42.772 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:42.772 ========================================================================== 00:29:42.772 Latency [us] 00:29:42.772 IOPS MiB/s Average min max 00:29:42.772 Core 2: 14551.03 56.84 1098.85 424.97 1431.03 00:29:42.772 Core 3: 14494.85 56.62 1103.10 447.07 2748.59 00:29:42.772 ========================================================================== 00:29:42.772 Total : 29045.88 113.46 1100.97 424.97 2748.59 00:29:42.772 00:29:42.772 Total operations: 145283, translate 0 pull_push 581132 memzero 0 00:29:42.772 10:57:11 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:29:42.772 10:57:11 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:29:42.772 10:57:11 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:29:42.772 10:57:11 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:29:42.772 Ignoring -M option 00:29:42.772 [2024-06-10 10:57:11.568849] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
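The pull_push totals line is internally consistent: Malloc0 reports no RDMA memory domain, so every I/O takes the pull/push fallback, and the counters work out to exactly four pull_push operations per I/O:

    581132 pull_push ops / 145283 I/Os = 4 ops per I/O
    29045.88 aggregate IOPS x 5 s = 145229, matching the 145283 I/Os counted

(The factor of four presumably reflects how each 4096-byte buffer is segmented for pull/push; the log doesn't show the split.)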
00:29:42.772 [2024-06-10 10:57:11.568898] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32881 ] 00:29:42.772 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.772 [2024-06-10 10:57:11.622041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:42.772 [2024-06-10 10:57:11.692629] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.772 [2024-06-10 10:57:11.692633] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.335 bdev a2172800-a8aa-4c47-a087-4cf9929ca3af reports 1 memory domains 00:29:49.335 bdev a2172800-a8aa-4c47-a087-4cf9929ca3af supports RDMA memory domain 00:29:49.335 Initialization complete, running randread IO for 5 sec on 2 cores 00:29:49.336 ========================================================================== 00:29:49.336 Latency [us] 00:29:49.336 IOPS MiB/s Average min max 00:29:49.336 Core 2: 80206.49 313.31 198.75 87.06 3505.35 00:29:49.336 Core 3: 81702.34 319.15 195.09 89.15 3570.32 00:29:49.336 ========================================================================== 00:29:49.336 Total : 161908.84 632.46 196.91 87.06 3570.32 00:29:49.336 00:29:49.336 Total operations: 809624, translate 0 pull_push 0 memzero 809624 00:29:49.336 10:57:17 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:29:49.336 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.336 [2024-06-10 10:57:17.209170] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:50.712 Initializing NVMe Controllers 00:29:50.712 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:29:50.712 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:50.712 Initialization complete. Launching workers. 00:29:50.712 ======================================================== 00:29:50.712 Latency(us) 00:29:50.712 Device Information : IOPS MiB/s Average min max 00:29:50.712 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7977.13 6997.10 8961.52 00:29:50.712 ======================================================== 00:29:50.712 Total : 2016.00 7.88 7977.13 6997.10 8961.52 00:29:50.712 00:29:50.712 10:57:19 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:29:50.712 10:57:19 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:29:50.712 10:57:19 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:29:50.712 10:57:19 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:29:50.712 [2024-06-10 10:57:19.545035] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
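Two quick consistency checks on the numbers above. The memzero run counts one memzero per I/O: 161908.84 aggregate IOPS x 5 s is about 809,544, against the 809,624 operations reported. For the spdk_nvme_perf write test, Little's law recovers the queue depth from the latency column:

    2016.00 IOPS x 7977.13 us average latency = 16.08 outstanding I/Os, i.e. the configured queue depth of 16

so the single qd-16 queue stays saturated for the whole one-second run.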
00:29:50.712 [2024-06-10 10:57:19.545082] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34024 ] 00:29:50.712 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.712 [2024-06-10 10:57:19.600923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:50.712 [2024-06-10 10:57:19.674123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.712 [2024-06-10 10:57:19.674125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.278 bdev 4afbaa1d-7fdb-4f3d-bfad-7b68dbc84432 reports 1 memory domains 00:29:57.278 bdev 4afbaa1d-7fdb-4f3d-bfad-7b68dbc84432 supports RDMA memory domain 00:29:57.278 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:57.278 ========================================================================== 00:29:57.278 Latency [us] 00:29:57.278 IOPS MiB/s Average min max 00:29:57.278 Core 2: 19769.32 77.22 808.67 13.26 13086.71 00:29:57.278 Core 3: 19996.47 78.11 799.50 14.06 12910.81 00:29:57.278 ========================================================================== 00:29:57.278 Total : 39765.79 155.34 804.06 13.26 13086.71 00:29:57.278 00:29:57.278 Total operations: 198878, translate 198774 pull_push 0 memzero 104 00:29:57.278 10:57:25 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:57.278 10:57:25 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:57.278 rmmod nvme_rdma 00:29:57.278 rmmod nvme_fabrics 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 30327 ']' 00:29:57.278 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 30327 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@949 -- # '[' -z 30327 ']' 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # kill -0 30327 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # uname 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 30327 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 30327' 00:29:57.279 killing process with pid 30327 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@968 -- # kill 30327 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@973 -- # wait 30327 
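The nvmftestfini teardown traced above unloads the fabrics modules with retries (they can still hold references immediately after the test) and then kills the target by pid. A condensed sketch; the retry pacing and the liveness probe are assumptions where the trace doesn't show the helper's internals:

cleanup_sketch() {
    local nvmfpid=$1
    sync
    set +e                                # module removal may fail transiently
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                           # assumption: pacing between retries not shown in the log
    done
    set -e
    if ps --no-headers -o comm= "$nvmfpid" &>/dev/null; then
        kill "$nvmfpid"
        wait "$nvmfpid"                   # valid because nvmf_tgt was launched by this shell
    fi
}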
00:29:57.279 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:57.279 10:57:25 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:57.279 00:29:57.279 real 0m31.777s 00:29:57.279 user 1m35.878s 00:29:57.279 sys 0m5.409s 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:57.279 10:57:25 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:29:57.279 ************************************ 00:29:57.279 END TEST dma 00:29:57.279 ************************************ 00:29:57.279 10:57:25 nvmf_rdma -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:57.279 10:57:25 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:57.279 10:57:25 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:57.279 10:57:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:57.279 ************************************ 00:29:57.279 START TEST nvmf_identify 00:29:57.279 ************************************ 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:57.279 * Looking for test storage... 00:29:57.279 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.279 10:57:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify 
-- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:02.548 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:02.548 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@377 -- 
# modinfo irdma 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:02.548 Found net devices under 0000:af:00.0: cvl_0_0 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:02.548 Found net devices under 0000:af:00.1: cvl_0_1 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:02.548 
10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:02.548 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:30:02.549 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:30:02.549 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:30:02.549 altname enp175s0f0np0 00:30:02.549 altname ens801f0np0 00:30:02.549 inet 192.168.100.8/24 scope global cvl_0_0 00:30:02.549 valid_lft forever preferred_lft forever 00:30:02.549 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:30:02.549 valid_lft forever preferred_lft forever 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # 
ip=192.168.100.9 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:30:02.549 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:30:02.549 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:30:02.549 altname enp175s0f1np1 00:30:02.549 altname ens801f1np1 00:30:02.549 inet 192.168.100.9/24 scope global cvl_0_1 00:30:02.549 valid_lft forever preferred_lft forever 00:30:02.549 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:30:02.549 valid_lft forever preferred_lft forever 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:02.549 10:57:31 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:02.549 192.168.100.9' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:02.549 192.168.100.9' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:02.549 192.168.100.9' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=38454 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 38454 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 38454 ']' 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
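waitforlisten gates the rest of the identify test on the target's RPC socket coming up. A simplified sketch of what the traced helper does; the retry budget comes from max_retries=100 in the trace, while probing readiness with rpc.py is an assumption about the detection mechanism:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        if /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py \
               -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0                             # RPC server is answering; target is ready
        fi
        sleep 0.5
    done
    return 1                                     # never came up within the retry budget
}

Once this returns 0, the script drives the target through rpc_cmd, as the nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener calls below show.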
00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:02.549 10:57:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:02.549 [2024-06-10 10:57:31.511097] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:30:02.549 [2024-06-10 10:57:31.511141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.549 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.549 [2024-06-10 10:57:31.569903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.808 [2024-06-10 10:57:31.650614] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.808 [2024-06-10 10:57:31.650648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.808 [2024-06-10 10:57:31.650655] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.808 [2024-06-10 10:57:31.650661] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.808 [2024-06-10 10:57:31.650667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.808 [2024-06-10 10:57:31.650700] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.808 [2024-06-10 10:57:31.650796] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.808 [2024-06-10 10:57:31.650881] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.808 [2024-06-10 10:57:31.650882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.377 [2024-06-10 10:57:32.342107] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x20af8f0/0x20aef30) succeed. 00:30:03.377 [2024-06-10 10:57:32.350962] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x20b0ca0/0x20af4b0) succeed. 00:30:03.377 [2024-06-10 10:57:32.350989] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.377 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.639 Malloc0 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.639 [2024-06-10 10:57:32.437821] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.639 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.639 [ 00:30:03.639 { 00:30:03.639 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:03.639 "subtype": "Discovery", 00:30:03.640 "listen_addresses": [ 00:30:03.640 { 00:30:03.640 "trtype": "RDMA", 00:30:03.640 "adrfam": "IPv4", 00:30:03.640 "traddr": "192.168.100.8", 00:30:03.640 "trsvcid": "4420" 00:30:03.640 } 00:30:03.640 ], 00:30:03.640 "allow_any_host": true, 00:30:03.640 "hosts": [] 00:30:03.640 }, 00:30:03.640 { 00:30:03.640 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:03.640 "subtype": "NVMe", 00:30:03.640 "listen_addresses": [ 00:30:03.640 { 00:30:03.640 "trtype": "RDMA", 00:30:03.640 "adrfam": "IPv4", 00:30:03.640 "traddr": "192.168.100.8", 00:30:03.640 "trsvcid": "4420" 00:30:03.640 } 00:30:03.640 ], 00:30:03.640 "allow_any_host": true, 00:30:03.640 "hosts": [], 00:30:03.640 "serial_number": "SPDK00000000000001", 00:30:03.640 "model_number": "SPDK bdev Controller", 00:30:03.640 "max_namespaces": 32, 00:30:03.640 "min_cntlid": 1, 00:30:03.640 "max_cntlid": 65519, 00:30:03.640 "namespaces": [ 00:30:03.640 { 00:30:03.640 "nsid": 1, 00:30:03.640 "bdev_name": "Malloc0", 00:30:03.640 "name": "Malloc0", 00:30:03.640 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:03.640 "eui64": "ABCDEF0123456789", 00:30:03.640 "uuid": "0e8bd3d1-2382-4b08-b526-3224ebca98a7" 00:30:03.640 } 00:30:03.640 ] 00:30:03.640 } 00:30:03.640 ] 00:30:03.640 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.640 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:03.640 [2024-06-10 10:57:32.489774] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:30:03.640 [2024-06-10 10:57:32.489823] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38510 ] 00:30:03.640 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.640 [2024-06-10 10:57:32.521177] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:03.640 [2024-06-10 10:57:32.521248] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:30:03.640 [2024-06-10 10:57:32.521266] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:30:03.640 [2024-06-10 10:57:32.521270] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:30:03.640 [2024-06-10 10:57:32.521299] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:03.640 [2024-06-10 10:57:32.533628] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:30:03.640 [2024-06-10 10:57:32.546436] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:03.640 [2024-06-10 10:57:32.546446] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:30:03.640 [2024-06-10 10:57:32.546452] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546457] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546462] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546466] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546471] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546475] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546479] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546484] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546488] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546492] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546497] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546501] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546505] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546509] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546514] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546518] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546522] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546526] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546530] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546535] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546539] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546546] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546550] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 
0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546555] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546559] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546563] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546567] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546572] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546576] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546580] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546584] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546588] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:30:03.640 [2024-06-10 10:57:32.546592] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:03.640 [2024-06-10 10:57:32.546595] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:30:03.640 [2024-06-10 10:57:32.546611] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.546624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.553964] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.640 [2024-06-10 10:57:32.553972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:03.640 [2024-06-10 10:57:32.553979] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.553986] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:03.640 [2024-06-10 10:57:32.553992] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:03.640 [2024-06-10 10:57:32.553996] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:03.640 [2024-06-10 10:57:32.554006] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.554013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.640 [2024-06-10 10:57:32.554041] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.640 [2024-06-10 10:57:32.554045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:30:03.640 [2024-06-10 10:57:32.554050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no 
timeout) 00:30:03.640 [2024-06-10 10:57:32.554054] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.554059] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:03.640 [2024-06-10 10:57:32.554065] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.640 [2024-06-10 10:57:32.554071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.640 [2024-06-10 10:57:32.554098] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.640 [2024-06-10 10:57:32.554102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:30:03.640 [2024-06-10 10:57:32.554107] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:03.641 [2024-06-10 10:57:32.554111] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554116] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:03.641 [2024-06-10 10:57:32.554122] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.641 [2024-06-10 10:57:32.554151] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554160] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:03.641 [2024-06-10 10:57:32.554164] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554170] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.641 [2024-06-10 10:57:32.554206] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554214] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:03.641 [2024-06-10 10:57:32.554218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:03.641 [2024-06-10 10:57:32.554222] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 
0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554227] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:03.641 [2024-06-10 10:57:32.554332] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:03.641 [2024-06-10 10:57:32.554336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:03.641 [2024-06-10 10:57:32.554343] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.641 [2024-06-10 10:57:32.554376] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554385] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:03.641 [2024-06-10 10:57:32.554388] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554396] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.641 [2024-06-10 10:57:32.554426] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554434] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:03.641 [2024-06-10 10:57:32.554438] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:03.641 [2024-06-10 10:57:32.554442] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554447] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:03.641 [2024-06-10 10:57:32.554454] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:03.641 [2024-06-10 10:57:32.554461] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554508] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ 
recv completion 00:30:03.641 [2024-06-10 10:57:32.554513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554519] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:03.641 [2024-06-10 10:57:32.554523] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:03.641 [2024-06-10 10:57:32.554527] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:03.641 [2024-06-10 10:57:32.554531] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 6 00:30:03.641 [2024-06-10 10:57:32.554535] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:03.641 [2024-06-10 10:57:32.554539] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:03.641 [2024-06-10 10:57:32.554543] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554551] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:03.641 [2024-06-10 10:57:32.554558] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.641 [2024-06-10 10:57:32.554594] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554608] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.641 [2024-06-10 10:57:32.554620] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.641 [2024-06-10 10:57:32.554630] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.641 [2024-06-10 10:57:32.554640] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.641 [2024-06-10 10:57:32.554649] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:03.641 [2024-06-10 10:57:32.554653] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:03.641 [2024-06-10 10:57:32.554665] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.641 [2024-06-10 10:57:32.554698] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554706] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:03.641 [2024-06-10 10:57:32.554712] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:03.641 [2024-06-10 10:57:32.554716] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554724] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554757] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:03.641 [2024-06-10 10:57:32.554766] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554773] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:03.641 [2024-06-10 10:57:32.554795] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554808] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.641 [2024-06-10 10:57:32.554842] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
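The DEBUG stream above is the fabrics controller-initialization state machine running over the admin queue: FABRIC CONNECT, property reads of VS and CAP, a disable/enable cycle on CC.EN with CSTS.RDY polled after each write, IDENTIFY CONTROLLER (MDTS caps transfers at 131072 bytes despite the transport's ~4 GiB maximum), AER setup across four admin CIDs, and keep-alive negotiation. The keep-alive figures can be checked straight from the completion shown: GET FEATURES KEEP ALIVE TIMER completed with cdw0:2710, and the host then schedules keep-alives at half that timeout:

  printf '%d\n' 0x2710   # -> 10000, the keep-alive timeout in ms
  # 10000 ms / 2 = 5000 ms = 5000000 us, matching the
  # "Sending keep alive every 5000000 us" notice in the trace.

Once the state machine reaches ready, the host issues GET LOG PAGE for the discovery log, whose rendered form follows below.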
00:30:03.641 [2024-06-10 10:57:32.554856] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554866] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1d5e686b 00:30:03.641 [2024-06-10 10:57:32.554870] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.641 [2024-06-10 10:57:32.554874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:03.642 [2024-06-10 10:57:32.554878] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1d5e686b 00:30:03.642 [2024-06-10 10:57:32.554901] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.642 [2024-06-10 10:57:32.554905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:03.642 [2024-06-10 10:57:32.554913] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1d5e686b 00:30:03.642 [2024-06-10 10:57:32.554918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x1d5e686b 00:30:03.642 [2024-06-10 10:57:32.554923] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1d5e686b 00:30:03.642 [2024-06-10 10:57:32.554947] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.642 [2024-06-10 10:57:32.554952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:03.642 [2024-06-10 10:57:32.554966] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1d5e686b 00:30:03.642 ===================================================== 00:30:03.642 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:03.642 ===================================================== 00:30:03.642 Controller Capabilities/Features 00:30:03.642 ================================ 00:30:03.642 Vendor ID: 0000 00:30:03.642 Subsystem Vendor ID: 0000 00:30:03.642 Serial Number: .................... 00:30:03.642 Model Number: ........................................ 
00:30:03.642 Firmware Version: 24.09 00:30:03.642 Recommended Arb Burst: 0 00:30:03.642 IEEE OUI Identifier: 00 00 00 00:30:03.642 Multi-path I/O 00:30:03.642 May have multiple subsystem ports: No 00:30:03.642 May have multiple controllers: No 00:30:03.642 Associated with SR-IOV VF: No 00:30:03.642 Max Data Transfer Size: 131072 00:30:03.642 Max Number of Namespaces: 0 00:30:03.642 Max Number of I/O Queues: 1024 00:30:03.642 NVMe Specification Version (VS): 1.3 00:30:03.642 NVMe Specification Version (Identify): 1.3 00:30:03.642 Maximum Queue Entries: 128 00:30:03.642 Contiguous Queues Required: Yes 00:30:03.642 Arbitration Mechanisms Supported 00:30:03.642 Weighted Round Robin: Not Supported 00:30:03.642 Vendor Specific: Not Supported 00:30:03.642 Reset Timeout: 15000 ms 00:30:03.642 Doorbell Stride: 4 bytes 00:30:03.642 NVM Subsystem Reset: Not Supported 00:30:03.642 Command Sets Supported 00:30:03.642 NVM Command Set: Supported 00:30:03.642 Boot Partition: Not Supported 00:30:03.642 Memory Page Size Minimum: 4096 bytes 00:30:03.642 Memory Page Size Maximum: 4096 bytes 00:30:03.642 Persistent Memory Region: Not Supported 00:30:03.642 Optional Asynchronous Events Supported 00:30:03.642 Namespace Attribute Notices: Not Supported 00:30:03.642 Firmware Activation Notices: Not Supported 00:30:03.642 ANA Change Notices: Not Supported 00:30:03.642 PLE Aggregate Log Change Notices: Not Supported 00:30:03.642 LBA Status Info Alert Notices: Not Supported 00:30:03.642 EGE Aggregate Log Change Notices: Not Supported 00:30:03.642 Normal NVM Subsystem Shutdown event: Not Supported 00:30:03.642 Zone Descriptor Change Notices: Not Supported 00:30:03.642 Discovery Log Change Notices: Supported 00:30:03.642 Controller Attributes 00:30:03.642 128-bit Host Identifier: Not Supported 00:30:03.642 Non-Operational Permissive Mode: Not Supported 00:30:03.642 NVM Sets: Not Supported 00:30:03.642 Read Recovery Levels: Not Supported 00:30:03.642 Endurance Groups: Not Supported 00:30:03.642 Predictable Latency Mode: Not Supported 00:30:03.642 Traffic Based Keep Alive: Not Supported 00:30:03.642 Namespace Granularity: Not Supported 00:30:03.642 SQ Associations: Not Supported 00:30:03.642 UUID List: Not Supported 00:30:03.642 Multi-Domain Subsystem: Not Supported 00:30:03.642 Fixed Capacity Management: Not Supported 00:30:03.642 Variable Capacity Management: Not Supported 00:30:03.642 Delete Endurance Group: Not Supported 00:30:03.642 Delete NVM Set: Not Supported 00:30:03.642 Extended LBA Formats Supported: Not Supported 00:30:03.642 Flexible Data Placement Supported: Not Supported 00:30:03.642 00:30:03.642 Controller Memory Buffer Support 00:30:03.642 ================================ 00:30:03.642 Supported: No 00:30:03.642 00:30:03.642 Persistent Memory Region Support 00:30:03.642 ================================ 00:30:03.642 Supported: No 00:30:03.642 00:30:03.642 Admin Command Set Attributes 00:30:03.642 ============================ 00:30:03.642 Security Send/Receive: Not Supported 00:30:03.642 Format NVM: Not Supported 00:30:03.642 Firmware Activate/Download: Not Supported 00:30:03.642 Namespace Management: Not Supported 00:30:03.642 Device Self-Test: Not Supported 00:30:03.642 Directives: Not Supported 00:30:03.642 NVMe-MI: Not Supported 00:30:03.642 Virtualization Management: Not Supported 00:30:03.642 Doorbell Buffer Config: Not Supported 00:30:03.642 Get LBA Status Capability: Not Supported 00:30:03.642 Command & Feature Lockdown Capability: Not Supported 00:30:03.642 Abort Command Limit: 1 00:30:03.642 Async
Event Request Limit: 4 00:30:03.642 Number of Firmware Slots: N/A 00:30:03.642 Firmware Slot 1 Read-Only: N/A 00:30:03.642 Firmware Activation Without Reset: N/A 00:30:03.642 Multiple Update Detection Support: N/A 00:30:03.642 Firmware Update Granularity: No Information Provided 00:30:03.642 Per-Namespace SMART Log: No 00:30:03.642 Asymmetric Namespace Access Log Page: Not Supported 00:30:03.642 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:03.642 Command Effects Log Page: Not Supported 00:30:03.642 Get Log Page Extended Data: Supported 00:30:03.642 Telemetry Log Pages: Not Supported 00:30:03.642 Persistent Event Log Pages: Not Supported 00:30:03.642 Supported Log Pages Log Page: May Support 00:30:03.642 Commands Supported & Effects Log Page: Not Supported 00:30:03.642 Feature Identifiers & Effects Log Page: May Support 00:30:03.642 NVMe-MI Commands & Effects Log Page: May Support 00:30:03.642 Data Area 4 for Telemetry Log: Not Supported 00:30:03.642 Error Log Page Entries Supported: 128 00:30:03.642 Keep Alive: Not Supported 00:30:03.642 00:30:03.642 NVM Command Set Attributes 00:30:03.642 ========================== 00:30:03.642 Submission Queue Entry Size 00:30:03.642 Max: 1 00:30:03.642 Min: 1 00:30:03.642 Completion Queue Entry Size 00:30:03.642 Max: 1 00:30:03.642 Min: 1 00:30:03.642 Number of Namespaces: 0 00:30:03.642 Compare Command: Not Supported 00:30:03.642 Write Uncorrectable Command: Not Supported 00:30:03.642 Dataset Management Command: Not Supported 00:30:03.642 Write Zeroes Command: Not Supported 00:30:03.642 Set Features Save Field: Not Supported 00:30:03.642 Reservations: Not Supported 00:30:03.642 Timestamp: Not Supported 00:30:03.642 Copy: Not Supported 00:30:03.642 Volatile Write Cache: Not Present 00:30:03.642 Atomic Write Unit (Normal): 1 00:30:03.642 Atomic Write Unit (PFail): 1 00:30:03.642 Atomic Compare & Write Unit: 1 00:30:03.642 Fused Compare & Write: Supported 00:30:03.642 Scatter-Gather List 00:30:03.642 SGL Command Set: Supported 00:30:03.642 SGL Keyed: Supported 00:30:03.642 SGL Bit Bucket Descriptor: Not Supported 00:30:03.642 SGL Metadata Pointer: Not Supported 00:30:03.642 Oversized SGL: Not Supported 00:30:03.642 SGL Metadata Address: Not Supported 00:30:03.642 SGL Offset: Supported 00:30:03.642 Transport SGL Data Block: Not Supported 00:30:03.642 Replay Protected Memory Block: Not Supported 00:30:03.642 00:30:03.642 Firmware Slot Information 00:30:03.642 ========================= 00:30:03.642 Active slot: 0 00:30:03.642 00:30:03.642 00:30:03.642 Error Log 00:30:03.642 ========= 00:30:03.642 00:30:03.642 Active Namespaces 00:30:03.642 ================= 00:30:03.642 Discovery Log Page 00:30:03.642 ================== 00:30:03.642 Generation Counter: 2 00:30:03.642 Number of Records: 2 00:30:03.642 Record Format: 0 00:30:03.643 00:30:03.643 Discovery Log Entry 0 00:30:03.643 ---------------------- 00:30:03.643 Transport Type: 1 (RDMA) 00:30:03.643 Address Family: 1 (IPv4) 00:30:03.643 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:03.643 Entry Flags: 00:30:03.643 Duplicate Returned Information: 1 00:30:03.643 Explicit Persistent Connection Support for Discovery: 1 00:30:03.643 Transport Requirements: 00:30:03.643 Secure Channel: Not Required 00:30:03.643 Port ID: 0 (0x0000) 00:30:03.643 Controller ID: 65535 (0xffff) 00:30:03.643 Admin Max SQ Size: 128 00:30:03.643 Transport Service Identifier: 4420 00:30:03.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:03.643 Transport Address: 192.168.100.8 00:30:03.643 
Transport Specific Address Subtype - RDMA 00:30:03.643 RDMA QP Service Type: 1 (Reliable Connected) 00:30:03.643 RDMA Provider Type: 1 (No provider specified) 00:30:03.643 RDMA CM Service: 1 (RDMA_CM) 00:30:03.643 Discovery Log Entry 1 00:30:03.643 ---------------------- 00:30:03.643 Transport Type: 1 (RDMA) 00:30:03.643 Address Family: 1 (IPv4) 00:30:03.643 Subsystem Type: 2 (NVM Subsystem) 00:30:03.643 Entry Flags: 00:30:03.643 Duplicate Returned Information: 0 00:30:03.643 Explicit Persistent Connection Support for Discovery: 0 00:30:03.643 Transport Requirements: 00:30:03.643 Secure Channel: Not Required 00:30:03.643 Port ID: 0 (0x0000) 00:30:03.643 Controller ID: 65535 (0xffff) 00:30:03.643 Admin Max SQ Size: [2024-06-10 10:57:32.555031] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:03.643 [2024-06-10 10:57:32.555039] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49676 doesn't match qid 00:30:03.643 [2024-06-10 10:57:32.555050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:7390 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555055] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49676 doesn't match qid 00:30:03.643 [2024-06-10 10:57:32.555061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:7390 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555065] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49676 doesn't match qid 00:30:03.643 [2024-06-10 10:57:32.555071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:7390 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555075] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49676 doesn't match qid 00:30:03.643 [2024-06-10 10:57:32.555081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32533 cdw0:5 sqhd:7390 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555088] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555122] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555134] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555144] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555168] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555177] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:03.643 [2024-06-10 10:57:32.555180] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:03.643 [2024-06-10 10:57:32.555184] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555191] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555223] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555232] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555239] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555276] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555285] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555291] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555319] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555329] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555337] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555376] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555386] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555393] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555423] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555432] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555439] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555471] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555480] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555487] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555518] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555527] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555533] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555572] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:30:03.643 [2024-06-10 10:57:32.555581] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555588] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555620] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.643 [2024-06-10 10:57:32.555624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:30:03.643 
[2024-06-10 10:57:32.555628] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555634] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.643 [2024-06-10 10:57:32.555640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.643 [2024-06-10 10:57:32.555662] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.555666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.555671] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555677] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.555713] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.555717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.555721] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555727] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.555762] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.555766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.555770] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555776] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.555806] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.555810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.555815] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555821] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 
10:57:32.555856] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.555860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.555865] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555871] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.555902] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.555906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.555911] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555917] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.555952] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.555962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.555967] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555976] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.555981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556010] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556018] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556025] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556057] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556066] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556072] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556101] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556109] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556116] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556145] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556154] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556161] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556190] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556198] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556205] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556236] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556247] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556253] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556284] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 
p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556292] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556299] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.644 [2024-06-10 10:57:32.556305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.644 [2024-06-10 10:57:32.556334] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.644 [2024-06-10 10:57:32.556338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:30:03.644 [2024-06-10 10:57:32.556342] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556349] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556381] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556390] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556396] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556428] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556437] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556444] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556476] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556484] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556491] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
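
These repeated record groups (FABRIC PROPERTY GET submit, CQ recv completion, SUCCESS with an incrementing sqhd) are one admin-queue register poll per iteration: the host is re-reading a controller register during the discovery-controller shutdown that completes further below, and the register value comes back in cdw0 of each completion. cdw0:1 reads as CSTS with RDY=1 and SHST=00b (shutdown not yet complete), so the driver polls again; the loop ends just before the "shutdown complete in 6 milliseconds" record below, when a completion returns cdw0:9, i.e. SHST=10b. A minimal standalone C sketch of that bit decoding, following the NVMe CSTS register layout (an illustration for reading this log, not SPDK's implementation):

  #include <stdint.h>
  #include <stdio.h>

  /* Decode the two CSTS values that appear as cdw0 in the completions in this
   * log: 0x1 while the shutdown poll loop spins, 0x9 once shutdown finishes. */
  static void decode_csts(uint32_t csts)
  {
      uint32_t rdy  = csts & 0x1;         /* CSTS.RDY,  bit 0 */
      uint32_t shst = (csts >> 2) & 0x3;  /* CSTS.SHST, bits 3:2 (10b = complete) */
      printf("CSTS=0x%x: RDY=%u SHST=%u%s\n", csts, rdy, shst,
             shst == 2 ? " -> shutdown complete" : " -> keep polling");
  }

  int main(void)
  {
      decode_csts(0x1); /* cdw0 in the completions above (sqhd incrementing) */
      decode_csts(0x9); /* cdw0 just before "shutdown complete in 6 milliseconds" */
      return 0;
  }
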
00:30:03.645 [2024-06-10 10:57:32.556526] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556536] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556542] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556573] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556581] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556588] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556619] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556627] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556634] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556668] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556676] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556683] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556718] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556726] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556733] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556762] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556771] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556777] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556815] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556825] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556832] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556862] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556871] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556909] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556918] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556924] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.556961] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.556965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.556970] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556977] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.556982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.557006] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.557010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.557014] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557021] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.557051] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.557056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.557060] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557067] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.557104] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.557109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.557113] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557120] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.557155] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.557159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.557164] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557170] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.645 [2024-06-10 10:57:32.557204] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.645 [2024-06-10 10:57:32.557209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:03.645 [2024-06-10 10:57:32.557213] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1d5e686b 00:30:03.645 [2024-06-10 10:57:32.557219] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557253] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557262] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557268] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557296] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557304] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557311] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557339] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557347] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557354] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557389] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557397] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 
lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557403] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557435] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557443] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557450] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557479] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557488] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557494] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557525] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557533] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557540] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557576] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557585] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557592] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557628] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 
10:57:32.557633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557637] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557644] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557675] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557684] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557691] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557720] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557728] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557735] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557767] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557776] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557782] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557816] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557824] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557831] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557837] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557867] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557875] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557882] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.557911] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.557915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.557920] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557927] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.557933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.561963] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.561969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.561974] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.561980] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.561986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.646 [2024-06-10 10:57:32.562013] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.646 [2024-06-10 10:57:32.562017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:30:03.646 [2024-06-10 10:57:32.562022] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1d5e686b 00:30:03.646 [2024-06-10 10:57:32.562026] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:30:03.646 128 00:30:03.646 Transport Service Identifier: 4420 00:30:03.646 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:03.646 Transport Address: 192.168.100.8 00:30:03.646 Transport Specific Address Subtype - RDMA 00:30:03.646 RDMA QP Service Type: 1 (Reliable Connected) 00:30:03.646 RDMA Provider Type: 1 (No provider specified) 00:30:03.646 RDMA CM Service: 1 (RDMA_CM) 00:30:03.647 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:03.647 [2024-06-10 10:57:32.625042] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:30:03.647 [2024-06-10 10:57:32.625089] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38591 ] 00:30:03.647 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.647 [2024-06-10 10:57:32.656703] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:03.647 [2024-06-10 10:57:32.656767] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:30:03.647 [2024-06-10 10:57:32.656780] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:30:03.647 [2024-06-10 10:57:32.656783] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:30:03.647 [2024-06-10 10:57:32.656805] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:03.910 [2024-06-10 10:57:32.671205] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:30:03.910 [2024-06-10 10:57:32.681482] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:03.910 [2024-06-10 10:57:32.681492] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:30:03.910 [2024-06-10 10:57:32.681498] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681504] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681511] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681516] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681520] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681525] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681529] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681534] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681538] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681542] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681547] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681551] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681555] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681559] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681564] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681568] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681572] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681576] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681580] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681584] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681588] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681593] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681597] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681601] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681605] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681609] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681613] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x489db9d1 00:30:03.910 [2024-06-10 10:57:32.681618] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.681622] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.681626] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.681630] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.681634] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:30:03.911 [2024-06-10 10:57:32.681638] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:03.911 [2024-06-10 10:57:32.681642] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:30:03.911 [2024-06-10 10:57:32.681655] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.681665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.686963] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.911 [2024-06-10 
10:57:32.686972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:30:03.911 [2024-06-10 10:57:32.686978] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x489db9d1
00:30:03.911 [2024-06-10 10:57:32.686983] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:30:03.911 [2024-06-10 10:57:32.686988] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:30:03.911 [2024-06-10 10:57:32.686993] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:30:03.911 [2024-06-10 10:57:32.687001] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1
00:30:03.911 [2024-06-10 10:57:32.687008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:30:03.911 [2024-06-10 10:57:32.687039] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:30:03.911 [2024-06-10 10:57:32.687043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:30:03.911 [2024-06-10 10:57:32.687048] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:30:03.911 [2024-06-10 10:57:32.687052] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x489db9d1
00:30:03.911 [2024-06-10 10:57:32.687056] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:30:03.911 [2024-06-10 10:57:32.687062] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1
00:30:03.911 [2024-06-10 10:57:32.687068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:30:03.911 [2024-06-10 10:57:32.687093] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:30:03.911 [2024-06-10 10:57:32.687097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:30:03.911 [2024-06-10 10:57:32.687102] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:30:03.911 [2024-06-10 10:57:32.687106] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x489db9d1
00:30:03.911 [2024-06-10 10:57:32.687110] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:30:03.911 [2024-06-10 10:57:32.687116] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1
00:30:03.911 [2024-06-10 10:57:32.687122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:30:03.911 [2024-06-10 10:57:32.687146] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:30:03.911 [2024-06-10 10:57:32.687150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
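
The cdw0 values in the completions just above are the raw register contents read with FABRIC PROPERTY GET while initializing nqn.2016-06.io.spdk:cnode1: cdw0:10300 is the VS register (NVMe 1.3) and cdw0:1e01007f is the low dword of CAP, which encodes the limits the identify summary prints at the end of this run (Maximum Queue Entries: 128, Contiguous Queues Required: Yes, Reset Timeout: 15000 ms). A small standalone C sketch of the field extraction per the NVMe register layout (illustrative only, not SPDK code); the SET FEATURES NUMBER OF QUEUES completion further below (cdw0:7e007e) decodes the same way into the 127 I/O queues reported later:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t vs     = 0x00010300; /* cdw0 of the VS property get above */
      uint32_t cap_lo = 0x1e01007f; /* cdw0 of the CAP property get above (low 32 bits) */
      uint32_t nq     = 0x007e007e; /* cdw0 of SET FEATURES NUMBER OF QUEUES, further below */

      /* VS: major in bits 31:16, minor in bits 15:8 -> "NVMe Specification Version (VS): 1.3" */
      printf("VS   = %u.%u\n", (vs >> 16) & 0xffff, (vs >> 8) & 0xff);

      /* CAP.MQES (bits 15:0) is zero-based -> "Maximum Queue Entries: 128" */
      printf("MQES = %u -> %u entries\n", cap_lo & 0xffff, (cap_lo & 0xffff) + 1);

      /* CAP.CQR (bit 16) -> "Contiguous Queues Required: Yes" */
      printf("CQR  = %u\n", (cap_lo >> 16) & 0x1);

      /* CAP.TO (bits 31:24) counts 500 ms units -> "Reset Timeout: 15000 ms" */
      printf("TO   = %u -> %u ms\n", (cap_lo >> 24) & 0xff, ((cap_lo >> 24) & 0xff) * 500);

      /* NSQA (bits 15:0) / NCQA (bits 31:16) are zero-based: 0x7e -> 127 I/O queues */
      printf("NSQA = %u, NCQA = %u -> 127 I/O queues\n", nq & 0xffff, (nq >> 16) & 0xffff);

      return 0;
  }
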
00:30:03.911 [2024-06-10 10:57:32.687154] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:03.911 [2024-06-10 10:57:32.687158] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687166] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.911 [2024-06-10 10:57:32.687199] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.911 [2024-06-10 10:57:32.687203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:03.911 [2024-06-10 10:57:32.687208] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:03.911 [2024-06-10 10:57:32.687211] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:03.911 [2024-06-10 10:57:32.687215] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:03.911 [2024-06-10 10:57:32.687325] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:03.911 [2024-06-10 10:57:32.687328] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:03.911 [2024-06-10 10:57:32.687334] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.911 [2024-06-10 10:57:32.687365] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.911 [2024-06-10 10:57:32.687369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:03.911 [2024-06-10 10:57:32.687373] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:03.911 [2024-06-10 10:57:32.687377] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687383] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.911 [2024-06-10 10:57:32.687414] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.911 [2024-06-10 10:57:32.687418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:03.911 
[2024-06-10 10:57:32.687422] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:03.911 [2024-06-10 10:57:32.687426] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:03.911 [2024-06-10 10:57:32.687430] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687435] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:03.911 [2024-06-10 10:57:32.687444] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:03.911 [2024-06-10 10:57:32.687451] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687498] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.911 [2024-06-10 10:57:32.687502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:03.911 [2024-06-10 10:57:32.687508] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:03.911 [2024-06-10 10:57:32.687512] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:03.911 [2024-06-10 10:57:32.687516] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:03.911 [2024-06-10 10:57:32.687519] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 6 00:30:03.911 [2024-06-10 10:57:32.687523] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:03.911 [2024-06-10 10:57:32.687527] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:03.911 [2024-06-10 10:57:32.687531] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687538] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:03.911 [2024-06-10 10:57:32.687545] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.911 [2024-06-10 10:57:32.687578] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.911 [2024-06-10 10:57:32.687582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:03.911 [2024-06-10 10:57:32.687590] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 
lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.911 [2024-06-10 10:57:32.687600] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.911 [2024-06-10 10:57:32.687610] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.911 [2024-06-10 10:57:32.687620] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x489db9d1 00:30:03.911 [2024-06-10 10:57:32.687625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.911 [2024-06-10 10:57:32.687629] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:03.911 [2024-06-10 10:57:32.687633] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687639] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687644] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.912 [2024-06-10 10:57:32.687673] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.687678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.687682] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:03.912 [2024-06-10 10:57:32.687687] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687691] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687696] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687707] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 
len:0x0 key:0x0 00:30:03.912 [2024-06-10 10:57:32.687742] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.687746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.687787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687791] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687797] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687803] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687842] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.687846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.687859] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:03.912 [2024-06-10 10:57:32.687866] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687870] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687875] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687882] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687924] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.687928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.687936] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687941] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687947] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.687953] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687965] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.687996] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688008] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.688012] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688017] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.688023] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.688028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.688032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.688036] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:03.912 [2024-06-10 10:57:32.688040] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:03.912 [2024-06-10 10:57:32.688044] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:03.912 [2024-06-10 10:57:32.688056] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.912 [2024-06-10 10:57:32.688068] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:03.912 [2024-06-10 10:57:32.688090] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688099] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688105] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.912 [2024-06-10 10:57:32.688116] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688124] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688143] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688152] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688158] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.912 [2024-06-10 10:57:32.688190] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688198] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688205] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.912 [2024-06-10 10:57:32.688235] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688243] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688251] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688264] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688275] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 
0x2000003ce000 len:0x200 key:0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688289] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688301] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688313] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x489db9d1 00:30:03.912 [2024-06-10 10:57:32.688329] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.912 [2024-06-10 10:57:32.688334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:03.912 [2024-06-10 10:57:32.688340] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x489db9d1 00:30:03.913 [2024-06-10 10:57:32.688346] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.913 [2024-06-10 10:57:32.688350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:03.913 [2024-06-10 10:57:32.688357] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x489db9d1 00:30:03.913 [2024-06-10 10:57:32.688361] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.913 [2024-06-10 10:57:32.688365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:03.913 [2024-06-10 10:57:32.688373] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x489db9d1 00:30:03.913 ===================================================== 00:30:03.913 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.913 ===================================================== 00:30:03.913 Controller Capabilities/Features 00:30:03.913 ================================ 00:30:03.913 Vendor ID: 8086 00:30:03.913 Subsystem Vendor ID: 8086 00:30:03.913 Serial Number: SPDK00000000000001 00:30:03.913 Model Number: SPDK bdev Controller 00:30:03.913 Firmware Version: 24.09 00:30:03.913 Recommended Arb Burst: 6 00:30:03.913 IEEE OUI Identifier: e4 d2 5c 00:30:03.913 Multi-path I/O 00:30:03.913 May have multiple subsystem ports: Yes 00:30:03.913 May have multiple controllers: Yes 00:30:03.913 Associated with SR-IOV VF: No 00:30:03.913 Max Data Transfer Size: 131072 00:30:03.913 Max Number of Namespaces: 32 00:30:03.913 Max Number of I/O Queues: 127 00:30:03.913 NVMe Specification Version (VS): 1.3 00:30:03.913 NVMe Specification Version (Identify): 1.3 00:30:03.913 Maximum Queue Entries: 128 00:30:03.913 Contiguous Queues Required: Yes 00:30:03.913 Arbitration Mechanisms Supported 00:30:03.913 Weighted Round Robin: Not Supported 00:30:03.913 Vendor Specific: Not Supported 00:30:03.913 Reset Timeout: 15000 ms 00:30:03.913 Doorbell Stride: 4 bytes 00:30:03.913 NVM Subsystem Reset: Not Supported 00:30:03.913 Command Sets Supported 00:30:03.913 NVM 
Command Set: Supported 00:30:03.913 Boot Partition: Not Supported 00:30:03.913 Memory Page Size Minimum: 4096 bytes 00:30:03.913 Memory Page Size Maximum: 4096 bytes 00:30:03.913 Persistent Memory Region: Not Supported 00:30:03.913 Optional Asynchronous Events Supported 00:30:03.913 Namespace Attribute Notices: Supported 00:30:03.913 Firmware Activation Notices: Not Supported 00:30:03.913 ANA Change Notices: Not Supported 00:30:03.913 PLE Aggregate Log Change Notices: Not Supported 00:30:03.913 LBA Status Info Alert Notices: Not Supported 00:30:03.913 EGE Aggregate Log Change Notices: Not Supported 00:30:03.913 Normal NVM Subsystem Shutdown event: Not Supported 00:30:03.913 Zone Descriptor Change Notices: Not Supported 00:30:03.913 Discovery Log Change Notices: Not Supported 00:30:03.913 Controller Attributes 00:30:03.913 128-bit Host Identifier: Supported 00:30:03.913 Non-Operational Permissive Mode: Not Supported 00:30:03.913 NVM Sets: Not Supported 00:30:03.913 Read Recovery Levels: Not Supported 00:30:03.913 Endurance Groups: Not Supported 00:30:03.913 Predictable Latency Mode: Not Supported 00:30:03.913 Traffic Based Keep Alive: Not Supported 00:30:03.913 Namespace Granularity: Not Supported 00:30:03.913 SQ Associations: Not Supported 00:30:03.913 UUID List: Not Supported 00:30:03.913 Multi-Domain Subsystem: Not Supported 00:30:03.913 Fixed Capacity Management: Not Supported 00:30:03.913 Variable Capacity Management: Not Supported 00:30:03.913 Delete Endurance Group: Not Supported 00:30:03.913 Delete NVM Set: Not Supported 00:30:03.913 Extended LBA Formats Supported: Not Supported 00:30:03.913 Flexible Data Placement Supported: Not Supported 00:30:03.913 00:30:03.913 Controller Memory Buffer Support 00:30:03.913 ================================ 00:30:03.913 Supported: No 00:30:03.913 00:30:03.913 Persistent Memory Region Support 00:30:03.913 ================================ 00:30:03.913 Supported: No 00:30:03.913 00:30:03.913 Admin Command Set Attributes 00:30:03.913 ============================ 00:30:03.913 Security Send/Receive: Not Supported 00:30:03.913 Format NVM: Not Supported 00:30:03.913 Firmware Activate/Download: Not Supported 00:30:03.913 Namespace Management: Not Supported 00:30:03.913 Device Self-Test: Not Supported 00:30:03.913 Directives: Not Supported 00:30:03.913 NVMe-MI: Not Supported 00:30:03.913 Virtualization Management: Not Supported 00:30:03.913 Doorbell Buffer Config: Not Supported 00:30:03.913 Get LBA Status Capability: Not Supported 00:30:03.913 Command & Feature Lockdown Capability: Not Supported 00:30:03.913 Abort Command Limit: 4 00:30:03.913 Async Event Request Limit: 4 00:30:03.913 Number of Firmware Slots: N/A 00:30:03.913 Firmware Slot 1 Read-Only: N/A 00:30:03.913 Firmware Activation Without Reset: N/A 00:30:03.913 Multiple Update Detection Support: N/A 00:30:03.913 Firmware Update Granularity: No Information Provided 00:30:03.913 Per-Namespace SMART Log: No 00:30:03.913 Asymmetric Namespace Access Log Page: Not Supported 00:30:03.913 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:03.913 Command Effects Log Page: Supported 00:30:03.913 Get Log Page Extended Data: Supported 00:30:03.913 Telemetry Log Pages: Not Supported 00:30:03.913 Persistent Event Log Pages: Not Supported 00:30:03.913 Supported Log Pages Log Page: May Support 00:30:03.913 Commands Supported & Effects Log Page: Not Supported 00:30:03.913 Feature Identifiers & Effects Log Page: May Support 00:30:03.913 NVMe-MI Commands & Effects Log Page: May Support 00:30:03.913 Data Area 4 for
Telemetry Log: Not Supported 00:30:03.913 Error Log Page Entries Supported: 128 00:30:03.913 Keep Alive: Supported 00:30:03.913 Keep Alive Granularity: 10000 ms 00:30:03.913 00:30:03.913 NVM Command Set Attributes 00:30:03.913 ========================== 00:30:03.913 Submission Queue Entry Size 00:30:03.913 Max: 64 00:30:03.913 Min: 64 00:30:03.913 Completion Queue Entry Size 00:30:03.913 Max: 16 00:30:03.913 Min: 16 00:30:03.913 Number of Namespaces: 32 00:30:03.913 Compare Command: Supported 00:30:03.913 Write Uncorrectable Command: Not Supported 00:30:03.913 Dataset Management Command: Supported 00:30:03.913 Write Zeroes Command: Supported 00:30:03.913 Set Features Save Field: Not Supported 00:30:03.913 Reservations: Supported 00:30:03.913 Timestamp: Not Supported 00:30:03.913 Copy: Supported 00:30:03.913 Volatile Write Cache: Present 00:30:03.913 Atomic Write Unit (Normal): 1 00:30:03.913 Atomic Write Unit (PFail): 1 00:30:03.913 Atomic Compare & Write Unit: 1 00:30:03.913 Fused Compare & Write: Supported 00:30:03.913 Scatter-Gather List 00:30:03.913 SGL Command Set: Supported 00:30:03.913 SGL Keyed: Supported 00:30:03.913 SGL Bit Bucket Descriptor: Not Supported 00:30:03.913 SGL Metadata Pointer: Not Supported 00:30:03.913 Oversized SGL: Not Supported 00:30:03.913 SGL Metadata Address: Not Supported 00:30:03.913 SGL Offset: Supported 00:30:03.913 Transport SGL Data Block: Not Supported 00:30:03.913 Replay Protected Memory Block: Not Supported 00:30:03.913 00:30:03.913 Firmware Slot Information 00:30:03.913 ========================= 00:30:03.913 Active slot: 1 00:30:03.913 Slot 1 Firmware Revision: 24.09 00:30:03.913 00:30:03.913 00:30:03.913 Commands Supported and Effects 00:30:03.913 ============================== 00:30:03.913 Admin Commands 00:30:03.913 -------------- 00:30:03.913 Get Log Page (02h): Supported 00:30:03.913 Identify (06h): Supported 00:30:03.913 Abort (08h): Supported 00:30:03.913 Set Features (09h): Supported 00:30:03.913 Get Features (0Ah): Supported 00:30:03.913 Asynchronous Event Request (0Ch): Supported 00:30:03.913 Keep Alive (18h): Supported 00:30:03.913 I/O Commands 00:30:03.913 ------------ 00:30:03.913 Flush (00h): Supported LBA-Change 00:30:03.913 Write (01h): Supported LBA-Change 00:30:03.913 Read (02h): Supported 00:30:03.913 Compare (05h): Supported 00:30:03.913 Write Zeroes (08h): Supported LBA-Change 00:30:03.913 Dataset Management (09h): Supported LBA-Change 00:30:03.913 Copy (19h): Supported LBA-Change 00:30:03.913 Unknown (79h): Supported LBA-Change 00:30:03.913 Unknown (7Ah): Supported 00:30:03.913 00:30:03.913 Error Log 00:30:03.913 ========= 00:30:03.913 00:30:03.913 Arbitration 00:30:03.913 =========== 00:30:03.913 Arbitration Burst: 1 00:30:03.913 00:30:03.913 Power Management 00:30:03.913 ================ 00:30:03.913 Number of Power States: 1 00:30:03.913 Current Power State: Power State #0 00:30:03.913 Power State #0: 00:30:03.913 Max Power: 0.00 W 00:30:03.913 Non-Operational State: Operational 00:30:03.913 Entry Latency: Not Reported 00:30:03.913 Exit Latency: Not Reported 00:30:03.913 Relative Read Throughput: 0 00:30:03.913 Relative Read Latency: 0 00:30:03.913 Relative Write Throughput: 0 00:30:03.913 Relative Write Latency: 0 00:30:03.913 Idle Power: Not Reported 00:30:03.913 Active Power: Not Reported 00:30:03.914 Non-Operational Permissive Mode: Not Supported 00:30:03.914 00:30:03.914 Health Information 00:30:03.914 ================== 00:30:03.914 Critical Warnings: 00:30:03.914 Available Spare Space: OK 00:30:03.914 Temperature: OK 
00:30:03.914 Device Reliability: OK 00:30:03.914 Read Only: No 00:30:03.914 Volatile Memory Backup: OK 00:30:03.914 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:03.914 Temperature Threshold: [2024-06-10 10:57:32.688445] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688477] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688486] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688504] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:03.914 [2024-06-10 10:57:32.688511] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51436 doesn't match qid 00:30:03.914 [2024-06-10 10:57:32.688523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32659 cdw0:5 sqhd:2390 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688527] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51436 doesn't match qid 00:30:03.914 [2024-06-10 10:57:32.688533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32659 cdw0:5 sqhd:2390 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688538] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51436 doesn't match qid 00:30:03.914 [2024-06-10 10:57:32.688544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32659 cdw0:5 sqhd:2390 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688548] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51436 doesn't match qid 00:30:03.914 [2024-06-10 10:57:32.688554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32659 cdw0:5 sqhd:2390 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688561] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688592] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688602] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688612] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688638] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:30:03.914 [2024-06-10 10:57:32.688642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688646] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:03.914 [2024-06-10 10:57:32.688651] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:03.914 [2024-06-10 10:57:32.688655] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688661] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688693] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688701] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688708] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688737] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688747] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688753] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688788] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688797] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688804] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688835] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688846] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688852] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688881] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688890] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688896] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688926] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688934] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688941] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.688977] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.688981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.688986] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688992] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.688998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.689021] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.689025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.689030] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689036] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.689064] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.689068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.689072] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689079] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.689107] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.689111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.689115] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689121] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.689155] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.689159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.689163] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689170] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.914 [2024-06-10 10:57:32.689176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.914 [2024-06-10 10:57:32.689199] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.914 [2024-06-10 10:57:32.689203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:03.914 [2024-06-10 10:57:32.689207] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x489db9d1 00:30:03.915 [2024-06-10 10:57:32.689214] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.915 [2024-06-10 10:57:32.689220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.915 [2024-06-10 10:57:32.689249] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.915 [2024-06-10 10:57:32.689253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:03.915 [2024-06-10 10:57:32.689257] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x489db9d1 00:30:03.915 [2024-06-10 10:57:32.689264] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.915 [2024-06-10 10:57:32.689270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.915 [2024-06-10 10:57:32.689294] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.915 [2024-06-10 10:57:32.689298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:30:03.915 [2024-06-10 10:57:32.689302] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x489db9d1 00:30:03.915 [2024-06-10 10:57:32.689309] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.915 [2024-06-10 10:57:32.689315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689340] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689348] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689354] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689392] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689400] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689407] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689436] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689444] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689451] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689484] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 
dnr:0 00:30:03.916 [2024-06-10 10:57:32.689492] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689499] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689531] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689539] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689546] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689573] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689582] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689588] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689622] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689631] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689637] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689669] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689678] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689684] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 
[2024-06-10 10:57:32.689722] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689730] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689738] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689766] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689774] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689780] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689817] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689825] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689831] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689864] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689872] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689879] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689912] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689920] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689927] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.689964] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.689968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.689972] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689979] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.689985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.690007] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.690011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.690015] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.690023] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.690029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.690051] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.690055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.690059] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.690066] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.690071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.916 [2024-06-10 10:57:32.690100] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.916 [2024-06-10 10:57:32.690104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:30:03.916 [2024-06-10 10:57:32.690109] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x489db9d1 00:30:03.916 [2024-06-10 10:57:32.690115] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690142] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690150] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690157] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690185] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690193] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690200] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690229] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690237] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690244] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690274] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690284] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690291] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690327] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690336] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690342] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690370] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690378] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690385] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690415] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690423] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690430] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690463] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690471] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690478] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690510] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690518] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690525] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690553] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690564] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 
lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690570] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690605] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690613] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690620] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690648] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690656] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690662] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690692] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690700] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690707] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690741] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690750] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690756] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690790] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 
10:57:32.690794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690798] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690805] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690835] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690845] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690851] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690886] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690895] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690901] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.917 [2024-06-10 10:57:32.690934] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.917 [2024-06-10 10:57:32.690938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:30:03.917 [2024-06-10 10:57:32.690942] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x489db9d1 00:30:03.917 [2024-06-10 10:57:32.690948] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.918 [2024-06-10 10:57:32.694961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.918 [2024-06-10 10:57:32.694971] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.918 [2024-06-10 10:57:32.694975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:30:03.918 [2024-06-10 10:57:32.694979] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x489db9d1 00:30:03.918 [2024-06-10 10:57:32.694986] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x489db9d1 00:30:03.918 [2024-06-10 10:57:32.694992] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:03.918 [2024-06-10 10:57:32.695017] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:03.918 [2024-06-10 10:57:32.695021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000d p:0 m:0 dnr:0 00:30:03.918 [2024-06-10 10:57:32.695025] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x489db9d1 00:30:03.918 [2024-06-10 10:57:32.695030] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:30:03.918 0 Kelvin (-273 Celsius) 00:30:03.918 Available Spare: 0% 00:30:03.918 Available Spare Threshold: 0% 00:30:03.918 Life Percentage Used: 0% 00:30:03.918 Data Units Read: 0 00:30:03.918 Data Units Written: 0 00:30:03.918 Host Read Commands: 0 00:30:03.918 Host Write Commands: 0 00:30:03.918 Controller Busy Time: 0 minutes 00:30:03.918 Power Cycles: 0 00:30:03.918 Power On Hours: 0 hours 00:30:03.918 Unsafe Shutdowns: 0 00:30:03.918 Unrecoverable Media Errors: 0 00:30:03.918 Lifetime Error Log Entries: 0 00:30:03.918 Warning Temperature Time: 0 minutes 00:30:03.918 Critical Temperature Time: 0 minutes 00:30:03.918 00:30:03.918 Number of Queues 00:30:03.918 ================ 00:30:03.918 Number of I/O Submission Queues: 127 00:30:03.918 Number of I/O Completion Queues: 127 00:30:03.918 00:30:03.918 Active Namespaces 00:30:03.918 ================= 00:30:03.918 Namespace ID:1 00:30:03.918 Error Recovery Timeout: Unlimited 00:30:03.918 Command Set Identifier: NVM (00h) 00:30:03.918 Deallocate: Supported 00:30:03.918 Deallocated/Unwritten Error: Not Supported 00:30:03.918 Deallocated Read Value: Unknown 00:30:03.918 Deallocate in Write Zeroes: Not Supported 00:30:03.918 Deallocated Guard Field: 0xFFFF 00:30:03.918 Flush: Supported 00:30:03.918 Reservation: Supported 00:30:03.918 Namespace Sharing Capabilities: Multiple Controllers 00:30:03.918 Size (in LBAs): 131072 (0GiB) 00:30:03.918 Capacity (in LBAs): 131072 (0GiB) 00:30:03.918 Utilization (in LBAs): 131072 (0GiB) 00:30:03.918 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:03.918 EUI64: ABCDEF0123456789 00:30:03.918 UUID: 0e8bd3d1-2382-4b08-b526-3224ebca98a7 00:30:03.918 Thin Provisioning: Not Supported 00:30:03.918 Per-NS Atomic Units: Yes 00:30:03.918 Atomic Boundary Size (Normal): 0 00:30:03.918 Atomic Boundary Size (PFail): 0 00:30:03.918 Atomic Boundary Offset: 0 00:30:03.918 Maximum Single Source Range Length: 65535 00:30:03.918 Maximum Copy Length: 65535 00:30:03.918 Maximum Source Range Count: 1 00:30:03.918 NGUID/EUI64 Never Reused: No 00:30:03.918 Namespace Write Protected: No 00:30:03.918 Number of LBA Formats: 1 00:30:03.918 Current LBA Format: LBA Format #00 00:30:03.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:03.918 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 
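For reference, the long run of FABRIC PROPERTY GET entries above is the host polling the controller's CSTS register over the fabric until shutdown completes ("shutdown complete in 6 milliseconds"), and the whole teardown is driven by a single SPDK RPC. A minimal standalone sketch of that step, assuming a running nvmf target on the default RPC socket and the repository checked out at the workspace path used throughout this log:

    #!/usr/bin/env bash
    # Tear down the test subsystem; the NQN matches the controller whose
    # identify data is printed above (nqn.2016-06.io.spdk:cnode1).
    SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1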
00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:03.918 rmmod nvme_rdma 00:30:03.918 rmmod nvme_fabrics 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 38454 ']' 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 38454 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 38454 ']' 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 38454 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 38454 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 38454' 00:30:03.918 killing process with pid 38454 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@968 -- # kill 38454 00:30:03.918 10:57:32 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@973 -- # wait 38454 00:30:04.177 10:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.177 10:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:04.177 00:30:04.177 real 0m7.505s 00:30:04.177 user 0m7.433s 00:30:04.177 sys 0m4.707s 00:30:04.177 10:57:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:04.177 10:57:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:04.177 ************************************ 00:30:04.177 END TEST nvmf_identify 00:30:04.177 ************************************ 00:30:04.177 10:57:33 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:30:04.177 10:57:33 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:04.177 10:57:33 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:04.177 10:57:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:04.177 ************************************ 00:30:04.177 START TEST nvmf_perf 00:30:04.177 ************************************ 00:30:04.177 10:57:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:30:04.177 * Looking for test storage... 00:30:04.437 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:04.437 10:57:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:11.043 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:11.043 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@377 -- # modinfo irdma 00:30:11.043 10:57:38 nvmf_rdma.nvmf_perf -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:11.043 Found net devices under 0000:af:00.0: cvl_0_0 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
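The trace above is nvmf/common.sh classifying the two E810 ports (vendor:device 0x8086:0x159b, driver ice) and loading irdma with RoCE enabled. A rough sketch of the same steps done by hand; the PCI ID comes from the log, while the tooling (pciutils, kmod, rdma-core) and an initially unloaded irdma are assumptions:

    lspci -d 8086:159b          # should list 0000:af:00.0 and 0000:af:00.1
    modinfo irdma >/dev/null    # confirm the driver is available, as the trace does
    modprobe irdma roce_ena=1   # load with RoCE enabled (no-op if already loaded)
    ibv_devices                 # expect rocep175s0f0 and rocep175s0f1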
00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:11.043 Found net devices under 0000:af:00.1: cvl_0_1 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.043 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:30:11.044 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:30:11.044 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:30:11.044 altname enp175s0f0np0 00:30:11.044 altname ens801f0np0 00:30:11.044 inet 192.168.100.8/24 scope global cvl_0_0 00:30:11.044 valid_lft forever preferred_lft forever 00:30:11.044 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:30:11.044 valid_lft forever preferred_lft forever 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:30:11.044 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:30:11.044 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:30:11.044 altname enp175s0f1np1 00:30:11.044 altname ens801f1np1 00:30:11.044 inet 192.168.100.9/24 scope global cvl_0_1 00:30:11.044 valid_lft forever preferred_lft forever 00:30:11.044 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:30:11.044 valid_lft forever preferred_lft forever 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
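allocate_nic_ips above ends up with 192.168.100.8/24 on cvl_0_0 and 192.168.100.9/24 on cvl_0_1. A sketch of assigning those addresses manually; the read-back pipeline on the last line is the exact one the trace itself uses:

    ip addr add 192.168.100.8/24 dev cvl_0_0
    ip addr add 192.168.100.9/24 dev cvl_0_1
    ip link set cvl_0_0 up; ip link set cvl_0_1 up
    ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8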
00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:11.044 192.168.100.9' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:11.044 192.168.100.9' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:11.044 192.168.100.9' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t 
rdma --num-shared-buffers 1024' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=42032 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 42032 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 42032 ']' 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:11.044 10:57:39 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.044 [2024-06-10 10:57:39.223732] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:30:11.045 [2024-06-10 10:57:39.223775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.045 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.045 [2024-06-10 10:57:39.283986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.045 [2024-06-10 10:57:39.362204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.045 [2024-06-10 10:57:39.362239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.045 [2024-06-10 10:57:39.362246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.045 [2024-06-10 10:57:39.362253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.045 [2024-06-10 10:57:39.362257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
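nvmfappstart above launches the target with all tracepoint groups enabled on a 4-core mask and then waits for its RPC socket. A condensed sketch built from the flags visible in the trace; the readiness loop shown here is one plausible way to wait and is an assumption, not necessarily what waitforlisten literally does:

    cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Assumption: poll a cheap RPC until the UNIX socket answers.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done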
00:30:11.045 [2024-06-10 10:57:39.362299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.045 [2024-06-10 10:57:39.362396] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.045 [2024-06-10 10:57:39.362610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.045 [2024-06-10 10:57:39.362612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.045 10:57:40 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:11.045 10:57:40 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:30:11.045 10:57:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:11.045 10:57:40 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:11.045 10:57:40 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:11.303 10:57:40 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.303 10:57:40 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:11.303 10:57:40 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:30:14.593 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:30:14.593 [2024-06-10 10:57:43.622463] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is the minimum size required to support msdbd=16 00:30:14.852 [2024-06-10 10:57:43.635789] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1fd9f40/0x1fd9580) succeeded. 00:30:14.852 [2024-06-10 10:57:43.644838] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1fdb2f0/0x1fd9b00) succeeded. 00:30:14.852 [2024-06-10 10:57:43.644859] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size.
New I/O unit size 24576 00:30:14.852 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.852 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:14.852 10:57:43 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.110 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:15.110 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:15.369 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:15.627 [2024-06-10 10:57:44.399891] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:15.627 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:15.627 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:15.627 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:15.627 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:15.627 10:57:44 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:17.004 Initializing NVMe Controllers 00:30:17.004 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:17.004 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:17.004 Initialization complete. Launching workers. 00:30:17.004 ======================================================== 00:30:17.004 Latency(us) 00:30:17.004 Device Information : IOPS MiB/s Average min max 00:30:17.004 PCIE (0000:5e:00.0) NSID 1 from core 0: 100411.98 392.23 318.10 34.04 7196.71 00:30:17.004 ======================================================== 00:30:17.004 Total : 100411.98 392.23 318.10 34.04 7196.71 00:30:17.004 00:30:17.004 10:57:45 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:17.004 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.290 Initializing NVMe Controllers 00:30:20.290 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.290 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:20.290 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:20.290 Initialization complete. Launching workers. 
00:30:20.290 ======================================================== 00:30:20.290 Latency(us) 00:30:20.290 Device Information : IOPS MiB/s Average min max 00:30:20.290 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6077.00 23.74 164.36 55.51 4092.16 00:30:20.290 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4890.00 19.10 204.30 75.27 4110.08 00:30:20.290 ======================================================== 00:30:20.290 Total : 10967.00 42.84 182.17 55.51 4110.08 00:30:20.290 00:30:20.290 10:57:49 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:20.290 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.575 Initializing NVMe Controllers 00:30:23.575 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.575 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.575 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.575 Initialization complete. Launching workers. 00:30:23.575 ======================================================== 00:30:23.575 Latency(us) 00:30:23.575 Device Information : IOPS MiB/s Average min max 00:30:23.575 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18391.00 71.84 1740.55 457.66 8423.55 00:30:23.575 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3936.00 15.38 8190.02 5907.41 16092.45 00:30:23.575 ======================================================== 00:30:23.575 Total : 22327.00 87.21 2877.52 457.66 16092.45 00:30:23.575 00:30:23.575 10:57:52 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:23.575 10:57:52 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ rdma == \r\d\m\a ]] 00:30:23.575 10:57:52 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:30:23.575 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.833 No valid NVMe controllers or AIO or URING devices found 00:30:23.833 Initializing NVMe Controllers 00:30:23.833 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.833 Controller IO queue size 128, less than required. 00:30:23.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:23.833 Controller IO queue size 128, less than required. 00:30:23.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:23.833 WARNING: Some requested NVMe devices were skipped 00:30:23.833 10:57:52 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:30:23.833 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.104 Initializing NVMe Controllers 00:30:29.104 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.104 Controller IO queue size 128, less than required. 00:30:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.104 Controller IO queue size 128, less than required. 00:30:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.104 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:29.104 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:29.104 Initialization complete. Launching workers. 00:30:29.104 00:30:29.104 ==================== 00:30:29.104 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:29.104 RDMA transport: 00:30:29.104 dev name: rocep175s0f0 00:30:29.104 polls: 355624 00:30:29.104 idle_polls: 350409 00:30:29.104 completions: 43082 00:30:29.104 queued_requests: 1 00:30:29.104 total_send_wrs: 21541 00:30:29.104 send_doorbell_updates: 4651 00:30:29.104 total_recv_wrs: 21668 00:30:29.104 recv_doorbell_updates: 4653 00:30:29.104 --------------------------------- 00:30:29.104 00:30:29.104 ==================== 00:30:29.104 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:29.104 RDMA transport: 00:30:29.104 dev name: rocep175s0f0 00:30:29.104 polls: 357161 00:30:29.104 idle_polls: 349171 00:30:29.104 completions: 51322 00:30:29.104 queued_requests: 1 00:30:29.104 total_send_wrs: 25661 00:30:29.104 send_doorbell_updates: 7008 00:30:29.104 total_recv_wrs: 25788 00:30:29.104 recv_doorbell_updates: 7010 00:30:29.104 --------------------------------- 00:30:29.104 ======================================================== 00:30:29.104 Latency(us) 00:30:29.104 Device Information : IOPS MiB/s Average min max 00:30:29.104 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5381.63 1345.41 23809.35 16641.29 53150.12 00:30:29.104 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6410.99 1602.75 19828.14 15033.78 42462.64 00:30:29.104 ======================================================== 00:30:29.104 Total : 11792.62 2948.15 21644.99 15033.78 53150.12 00:30:29.104 00:30:29.104 10:57:57 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:29.104 10:57:57 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.104 10:57:57 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:29.104 10:57:57 nvmf_rdma.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:29.104 10:57:57 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030 
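With the raw-transport runs and their RDMA transport statistics done, the test tears down cnode1 and carves the lvol stack it benchmarks next: a 20 GiB lvol on lvs_0, plus a nested lvstore on top of it. A sketch of that flow using the same RPCs, names, and sizes the trace shows here and below; capturing each UUID from rpc.py stdout mirrors what perf.sh itself does:

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    ls_guid=$($rpc bdev_lvol_create_lvstore Nvme0n1 lvs_0)
    lb_guid=$($rpc bdev_lvol_create -u "$ls_guid" lbd_0 20480)   # 20480 MB, capped from 952936
    $rpc bdev_lvol_create_lvstore "$lb_guid" lvs_n_0             # nested store for lbd_nest_0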
00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:30:32.393 { 00:30:32.393 "uuid": "9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030", 00:30:32.393 "name": "lvs_0", 00:30:32.393 "base_bdev": "Nvme0n1", 00:30:32.393 "total_data_clusters": 238234, 00:30:32.393 "free_clusters": 238234, 00:30:32.393 "block_size": 512, 00:30:32.393 "cluster_size": 4194304 00:30:32.393 } 00:30:32.393 ]' 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030") .free_clusters' 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=238234 00:30:32.393 10:58:00 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030") .cluster_size' 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=952936 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 952936 00:30:32.393 952936 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030 lbd_0 20480 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4ff30547-c30d-4362-a358-2954f534a82b 00:30:32.393 10:58:01 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4ff30547-c30d-4362-a358-2954f534a82b lvs_n_0 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1e9dab8b-75f3-45cd-9668-4b61a8c3ef3b 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1e9dab8b-75f3-45cd-9668-4b61a8c3ef3b 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=1e9dab8b-75f3-45cd-9668-4b61a8c3ef3b 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:30:33.376 { 00:30:33.376 "uuid": "9d6c2fa7-2bfd-404f-9dbf-bd76e67a1030", 00:30:33.376 "name": "lvs_0", 00:30:33.376 "base_bdev": 
"Nvme0n1", 00:30:33.376 "total_data_clusters": 238234, 00:30:33.376 "free_clusters": 233114, 00:30:33.376 "block_size": 512, 00:30:33.376 "cluster_size": 4194304 00:30:33.376 }, 00:30:33.376 { 00:30:33.376 "uuid": "1e9dab8b-75f3-45cd-9668-4b61a8c3ef3b", 00:30:33.376 "name": "lvs_n_0", 00:30:33.376 "base_bdev": "4ff30547-c30d-4362-a358-2954f534a82b", 00:30:33.376 "total_data_clusters": 5114, 00:30:33.376 "free_clusters": 5114, 00:30:33.376 "block_size": 512, 00:30:33.376 "cluster_size": 4194304 00:30:33.376 } 00:30:33.376 ]' 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="1e9dab8b-75f3-45cd-9668-4b61a8c3ef3b") .free_clusters' 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=5114 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1e9dab8b-75f3-45cd-9668-4b61a8c3ef3b") .cluster_size' 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=20456 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 20456 00:30:33.376 20456 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:33.376 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1e9dab8b-75f3-45cd-9668-4b61a8c3ef3b lbd_nest_0 20456 00:30:33.634 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5990b9da-3580-435c-a1e1-f9fd12443499 00:30:33.634 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:33.893 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:33.893 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5990b9da-3580-435c-a1e1-f9fd12443499 00:30:33.893 10:58:02 nvmf_rdma.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:34.152 10:58:03 nvmf_rdma.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:34.152 10:58:03 nvmf_rdma.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:34.152 10:58:03 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:34.152 10:58:03 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:34.152 10:58:03 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:34.152 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.357 Initializing NVMe Controllers 00:30:46.357 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.357 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:46.357 Initialization complete. Launching workers. 
00:30:46.357 ======================================================== 00:30:46.357 Latency(us) 00:30:46.357 Device Information : IOPS MiB/s Average min max 00:30:46.357 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5209.20 2.54 191.51 76.76 8051.78 00:30:46.357 ======================================================== 00:30:46.357 Total : 5209.20 2.54 191.51 76.76 8051.78 00:30:46.357 00:30:46.357 10:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:46.357 10:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:46.357 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.568 Initializing NVMe Controllers 00:30:58.568 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.568 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.568 Initialization complete. Launching workers. 00:30:58.568 ======================================================== 00:30:58.568 Latency(us) 00:30:58.568 Device Information : IOPS MiB/s Average min max 00:30:58.568 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 125.47 15.68 7969.90 4784.65 10973.04 00:30:58.568 ======================================================== 00:30:58.568 Total : 125.47 15.68 7969.90 4784.65 10973.04 00:30:58.568 00:30:58.568 10:58:25 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:58.568 10:58:25 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:58.568 10:58:25 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:58.568 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.543 Initializing NVMe Controllers 00:31:08.543 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.543 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.543 Initialization complete. Launching workers. 00:31:08.543 ======================================================== 00:31:08.543 Latency(us) 00:31:08.543 Device Information : IOPS MiB/s Average min max 00:31:08.543 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11247.30 5.49 2845.00 741.92 9891.70 00:31:08.543 ======================================================== 00:31:08.543 Total : 11247.30 5.49 2845.00 741.92 9891.70 00:31:08.543 00:31:08.543 10:58:36 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:08.543 10:58:36 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:08.543 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.808 Initializing NVMe Controllers 00:31:20.808 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.808 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.808 Initialization complete. Launching workers. 
00:31:20.808 ======================================================== 00:31:20.808 Latency(us) 00:31:20.808 Device Information : IOPS MiB/s Average min max 00:31:20.808 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8940.92 1117.61 3578.39 508.26 10464.94 00:31:20.809 ======================================================== 00:31:20.809 Total : 8940.92 1117.61 3578.39 508.26 10464.94 00:31:20.809 00:31:20.809 10:58:48 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:20.809 10:58:48 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:20.809 10:58:48 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:20.809 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.783 Initializing NVMe Controllers 00:31:30.783 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:30.783 Controller IO queue size 128, less than required. 00:31:30.783 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:30.783 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:30.783 Initialization complete. Launching workers. 00:31:30.783 ======================================================== 00:31:30.783 Latency(us) 00:31:30.783 Device Information : IOPS MiB/s Average min max 00:31:30.783 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18744.17 9.15 6831.25 2086.28 15503.92 00:31:30.783 ======================================================== 00:31:30.783 Total : 18744.17 9.15 6831.25 2086.28 15503.92 00:31:30.783 00:31:30.783 10:58:59 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:30.783 10:58:59 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:30.783 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.983 Initializing NVMe Controllers 00:31:42.983 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.984 Controller IO queue size 128, less than required. 00:31:42.984 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:42.984 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:42.984 Initialization complete. Launching workers. 
00:31:42.984 ======================================================== 00:31:42.984 Latency(us) 00:31:42.984 Device Information : IOPS MiB/s Average min max 00:31:42.984 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6221.13 777.64 20592.84 7876.10 39775.17 00:31:42.984 ======================================================== 00:31:42.984 Total : 6221.13 777.64 20592.84 7876.10 39775.17 00:31:42.984 00:31:42.984 10:59:11 nvmf_rdma.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.984 10:59:11 nvmf_rdma.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5990b9da-3580-435c-a1e1-f9fd12443499 00:31:42.984 10:59:11 nvmf_rdma.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:43.241 10:59:12 nvmf_rdma.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ff30547-c30d-4362-a358-2954f534a82b 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:43.497 rmmod nvme_rdma 00:31:43.497 rmmod nvme_fabrics 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 42032 ']' 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 42032 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 42032 ']' 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 42032 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:43.497 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 42032 00:31:43.754 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:43.754 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:43.754 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 42032' 00:31:43.754 killing process with pid 42032 00:31:43.754 10:59:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@968 -- # kill 42032 00:31:43.754 10:59:12 
nvmf_rdma.nvmf_perf -- common/autotest_common.sh@973 -- # wait 42032 00:31:45.130 10:59:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:45.130 10:59:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:45.130 00:31:45.130 real 1m40.942s 00:31:45.130 user 6m22.481s 00:31:45.130 sys 0m6.131s 00:31:45.130 10:59:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:45.130 10:59:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:45.130 ************************************ 00:31:45.130 END TEST nvmf_perf 00:31:45.130 ************************************ 00:31:45.130 10:59:14 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:45.130 10:59:14 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:45.130 10:59:14 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:45.130 10:59:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:45.130 ************************************ 00:31:45.130 START TEST nvmf_fio_host 00:31:45.130 ************************************ 00:31:45.130 10:59:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:45.390 * Looking for test storage... 00:31:45.390 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
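A side note on the environment setup being traced here: nvmf/common.sh derives a per-run host identity before any connect is attempted. A rough equivalent, assuming nvme-cli is installed (the exact expansion used to split out the UUID is not shown in the trace, so the one below is illustrative):

# generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTNQN=$(nvme gen-hostnqn)
# keep only the UUID suffix as the host ID (illustrative extraction)
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
# both flags are later passed to 'nvme connect' invocations
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")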
00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.390 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:45.391 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:45.391 10:59:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:45.391 10:59:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.955 10:59:19 
nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:51.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:51.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host 
-- nvmf/common.sh@377 -- # modinfo irdma 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:51.955 Found net devices under 0000:af:00.0: cvl_0_0 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:51.955 Found net devices under 0000:af:00.1: cvl_0_1 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo cvl_0_0 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo cvl_0_1 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:51.955 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:31:51.956 20: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:31:51.956 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:31:51.956 altname enp175s0f0np0 00:31:51.956 altname ens801f0np0 00:31:51.956 inet 192.168.100.8/24 scope global cvl_0_0 00:31:51.956 valid_lft forever preferred_lft forever 00:31:51.956 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:31:51.956 valid_lft forever preferred_lft forever 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host --
nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:31:51.956 21: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:31:51.956 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:31:51.956 altname enp175s0f1np1 00:31:51.956 altname ens801f1np1 00:31:51.956 inet 192.168.100.9/24 scope global cvl_0_1 00:31:51.956 valid_lft forever preferred_lft forever 00:31:51.956 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:31:51.956 valid_lft forever preferred_lft forever 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo cvl_0_0 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo cvl_0_1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:51.956
10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:51.956 192.168.100.9' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:51.956 192.168.100.9' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:51.956 192.168.100.9' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:51.956 10:59:19 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=60536 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 60536 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 60536 ']' 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
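The head/tail juggling traced just above reduces to taking the first and second discovered RDMA addresses; a compact equivalent, with RDMA_IP_LIST holding the two per-port addresses seen in this run:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'   # one address per line, cvl_0_0 then cvl_0_1
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

The target launched next (nvmf_tgt -i 0 -e 0xFFFF -m 0xF) takes -i as the shared-memory ID, -e as the tracepoint group mask, and -m as the reactor core mask, which is why reactors come up on cores 0 through 3 in the notices that follow.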
00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.956 [2024-06-10 10:59:20.067297] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:31:51.956 [2024-06-10 10:59:20.067347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.956 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.956 [2024-06-10 10:59:20.127488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:51.956 [2024-06-10 10:59:20.205607] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.956 [2024-06-10 10:59:20.205644] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.956 [2024-06-10 10:59:20.205651] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.956 [2024-06-10 10:59:20.205657] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.956 [2024-06-10 10:59:20.205662] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.956 [2024-06-10 10:59:20.205706] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.956 [2024-06-10 10:59:20.205804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.956 [2024-06-10 10:59:20.205867] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:51.956 [2024-06-10 10:59:20.205868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:31:51.956 10:59:20 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:52.215 [2024-06-10 10:59:21.048590] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x8418f0/0x840f30) succeed. 00:31:52.215 [2024-06-10 10:59:21.057568] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x842ca0/0x8414b0) succeed. 00:31:52.215 [2024-06-10 10:59:21.057589] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:31:52.215 10:59:21 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:52.215 10:59:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:52.215 10:59:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.215 10:59:21 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:52.474 Malloc1 00:31:52.474 10:59:21 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:52.474 10:59:21 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:52.731 10:59:21 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:52.990 [2024-06-10 10:59:21.818907] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:52.990 10:59:21 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
[[ -n '' ]] 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:53.267 10:59:22 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:53.529 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:53.529 fio-3.35 00:31:53.529 Starting 1 thread 00:31:53.529 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.079 00:31:56.079 test: (groupid=0, jobs=1): err= 0: pid=61025: Mon Jun 10 10:59:24 2024 00:31:56.079 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2003msec) 00:31:56.079 slat (nsec): min=1400, max=23965, avg=1545.14, stdev=475.19 00:31:56.079 clat (usec): min=2069, max=6440, avg=3552.44, stdev=82.20 00:31:56.079 lat (usec): min=2089, max=6441, avg=3553.99, stdev=82.13 00:31:56.079 clat percentiles (usec): 00:31:56.079 | 1.00th=[ 3523], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523], 00:31:56.079 | 30.00th=[ 3556], 40.00th=[ 3556], 50.00th=[ 3556], 60.00th=[ 3556], 00:31:56.079 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3589], 00:31:56.079 | 99.00th=[ 3654], 99.50th=[ 3720], 99.90th=[ 4621], 99.95th=[ 5538], 00:31:56.079 | 99.99th=[ 6390] 00:31:56.079 bw ( KiB/s): min=70048, max=72376, per=100.00%, avg=71554.00, stdev=1029.37, samples=4 00:31:56.079 iops : min=17512, max=18094, avg=17888.50, stdev=257.34, samples=4 00:31:56.079 write: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2003msec); 0 zone resets 00:31:56.079 slat (nsec): min=1449, max=17557, avg=1637.83, stdev=441.74 00:31:56.079 clat (usec): min=2100, max=6446, avg=3550.90, stdev=83.18 00:31:56.079 lat (usec): min=2110, max=6448, avg=3552.53, stdev=83.10 00:31:56.079 clat percentiles (usec): 00:31:56.079 | 1.00th=[ 3523], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523], 00:31:56.079 | 30.00th=[ 3556], 40.00th=[ 3556], 50.00th=[ 3556], 60.00th=[ 3556], 00:31:56.079 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3589], 00:31:56.079 | 99.00th=[ 3654], 99.50th=[ 3687], 99.90th=[ 4686], 99.95th=[ 5538], 00:31:56.079 | 99.99th=[ 6390] 00:31:56.079 bw ( KiB/s): min=69968, max=72208, per=99.95%, avg=71548.00, stdev=1063.36, samples=4 00:31:56.079 iops : min=17492, max=18052, avg=17887.00, stdev=265.84, samples=4 00:31:56.079 lat (msec) : 4=99.87%, 10=0.13% 00:31:56.079 cpu : usr=99.50%, sys=0.15%, ctx=8, majf=0, minf=2 00:31:56.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:56.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:31:56.079 issued rwts: total=35832,35846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:56.079 00:31:56.079 Run status group 0 (all jobs): 00:31:56.079 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2003-2003msec 00:31:56.079 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2003-2003msec 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:56.079 10:59:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:31:56.079 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:56.079 fio-3.35 00:31:56.079 Starting 1 thread 00:31:56.079 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.601 00:31:58.601 test: (groupid=0, jobs=1): err= 0: pid=61488: Mon Jun 10 10:59:27 2024 00:31:58.601 read: IOPS=13.4k, BW=209MiB/s (219MB/s)(413MiB/1978msec) 00:31:58.601 slat (nsec): min=2318, max=29154, avg=2691.79, stdev=934.89 00:31:58.601 clat (usec): min=422, max=7317, avg=2193.55, stdev=1077.14 00:31:58.601 lat (usec): min=425, max=7320, avg=2196.24, stdev=1077.45 00:31:58.601 clat percentiles (usec): 00:31:58.601 | 1.00th=[ 914], 5.00th=[ 1172], 10.00th=[ 1303], 20.00th=[ 1467], 00:31:58.601 | 30.00th=[ 1582], 40.00th=[ 1713], 50.00th=[ 1860], 60.00th=[ 2040], 00:31:58.601 | 70.00th=[ 2278], 80.00th=[ 2671], 90.00th=[ 3621], 95.00th=[ 4752], 00:31:58.601 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7046], 99.95th=[ 7111], 00:31:58.601 | 99.99th=[ 7242] 00:31:58.601 bw ( KiB/s): min=102656, max=106464, per=48.91%, avg=104640.00, stdev=1596.80, samples=4 00:31:58.601 iops : min= 6416, max= 6654, avg=6540.00, stdev=99.80, samples=4 00:31:58.601 write: IOPS=7430, BW=116MiB/s (122MB/s)(212MiB/1829msec); 0 zone resets 00:31:58.601 slat (usec): min=27, max=134, avg=29.98, stdev= 4.54 00:31:58.601 clat (usec): min=3688, max=20203, avg=12687.60, stdev=1405.53 00:31:58.601 lat (usec): min=3715, max=20230, avg=12717.58, stdev=1405.34 00:31:58.601 clat percentiles (usec): 00:31:58.601 | 1.00th=[ 7504], 5.00th=[10945], 10.00th=[11338], 20.00th=[11731], 00:31:58.601 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:31:58.601 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14353], 95.00th=[14877], 00:31:58.601 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17695], 99.95th=[19792], 00:31:58.601 | 99.99th=[20055] 00:31:58.601 bw ( KiB/s): min=105760, max=111008, per=90.79%, avg=107944.00, stdev=2293.07, samples=4 00:31:58.601 iops : min= 6610, max= 6938, avg=6746.50, stdev=143.32, samples=4 00:31:58.601 lat (usec) : 500=0.02%, 750=0.19%, 1000=0.97% 00:31:58.601 lat (msec) : 2=37.25%, 4=21.86%, 10=6.40%, 20=33.29%, 50=0.01% 00:31:58.601 cpu : usr=97.26%, sys=2.20%, ctx=89, majf=0, minf=1 00:31:58.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:31:58.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:58.601 issued rwts: total=26451,13591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:58.601 00:31:58.601 Run status group 0 (all jobs): 00:31:58.601 READ: bw=209MiB/s (219MB/s), 209MiB/s-209MiB/s (219MB/s-219MB/s), io=413MiB (433MB), run=1978-1978msec 00:31:58.601 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=212MiB (223MB), run=1829-1829msec 00:31:58.601 10:59:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=() 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@1512 -- # local bdfs 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 00:31:58.602 10:59:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 192.168.100.8 00:32:01.877 Nvme0n1 00:32:01.877 10:59:30 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:04.398 10:59:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=7301b122-4a32-4458-83f3-aede54e1d017 00:32:04.398 10:59:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 7301b122-4a32-4458-83f3-aede54e1d017 00:32:04.398 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=7301b122-4a32-4458-83f3-aede54e1d017 00:32:04.399 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:32:04.399 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:32:04.399 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:32:04.399 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:32:04.656 { 00:32:04.656 "uuid": "7301b122-4a32-4458-83f3-aede54e1d017", 00:32:04.656 "name": "lvs_0", 00:32:04.656 "base_bdev": "Nvme0n1", 00:32:04.656 "total_data_clusters": 930, 00:32:04.656 "free_clusters": 930, 00:32:04.656 "block_size": 512, 00:32:04.656 "cluster_size": 1073741824 00:32:04.656 } 00:32:04.656 ]' 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="7301b122-4a32-4458-83f3-aede54e1d017") .free_clusters' 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=930 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="7301b122-4a32-4458-83f3-aede54e1d017") .cluster_size' 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=1073741824 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=952320 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 952320 00:32:04.656 952320 00:32:04.656 10:59:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:05.220 d9af6add-5946-443a-999c-1923cb8df84d 00:32:05.220 10:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:05.220 10:59:34 
nvmf_rdma.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:05.477 10:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:05.735 10:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:05.993 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk, iodepth=128 00:32:05.993 fio-3.35 00:32:05.993 Starting 1 thread 00:32:05.993 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.518 00:32:08.518 test: (groupid=0, jobs=1): err= 0: pid=63203: Mon Jun 10 10:59:37 2024 00:32:08.518 read: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(83.3MiB/2005msec) 00:32:08.518 slat (nsec): min=1397, max=17114, avg=1513.49, stdev=259.34 00:32:08.518 clat (usec): min=336, max=169056, avg=6012.26, stdev=9075.33 00:32:08.518 lat (usec): min=338, max=169070, avg=6013.78, stdev=9075.37 00:32:08.518 clat percentiles (msec): 00:32:08.518 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:08.518 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:32:08.518 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:32:08.518 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 169], 99.95th=[ 169], 00:32:08.518 | 99.99th=[ 169] 00:32:08.518 bw ( KiB/s): min=30376, max=47032, per=99.95%, avg=42528.00, stdev=8107.89, samples=4 00:32:08.518 iops : min= 7594, max=11758, avg=10632.00, stdev=2026.97, samples=4 00:32:08.518 write: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(83.2MiB/2005msec); 0 zone resets 00:32:08.518 slat (nsec): min=1449, max=17886, avg=1624.46, stdev=355.10 00:32:08.518 clat (usec): min=139, max=169359, avg=5960.59, stdev=8469.74 00:32:08.518 lat (usec): min=140, max=169363, avg=5962.21, stdev=8469.79 00:32:08.518 clat percentiles (msec): 00:32:08.518 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:08.518 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:32:08.518 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:32:08.518 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 169], 99.95th=[ 169], 00:32:08.518 | 99.99th=[ 169] 00:32:08.518 bw ( KiB/s): min=31320, max=46504, per=99.95%, avg=42498.00, stdev=7456.21, samples=4 00:32:08.518 iops : min= 7830, max=11626, avg=10624.50, stdev=1864.05, samples=4 00:32:08.518 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:08.518 lat (msec) : 2=0.05%, 4=0.22%, 10=99.40%, 250=0.30% 00:32:08.518 cpu : usr=99.60%, sys=0.10%, ctx=8, majf=0, minf=2 00:32:08.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:08.518 issued rwts: total=21328,21312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:08.518 00:32:08.518 Run status group 0 (all jobs): 00:32:08.518 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=83.3MiB (87.4MB), run=2005-2005msec 00:32:08.518 WRITE: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=83.2MiB (87.3MB), run=2005-2005msec 00:32:08.518 10:59:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:08.518 10:59:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:09.451 10:59:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9ab21a98-3df8-4bc9-be22-08aa5a09ba96 00:32:09.451 10:59:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9ab21a98-3df8-4bc9-be22-08aa5a09ba96 00:32:09.451 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local 
lvs_uuid=9ab21a98-3df8-4bc9-be22-08aa5a09ba96 00:32:09.451 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:32:09.451 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:32:09.451 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:32:09.451 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:32:09.709 { 00:32:09.709 "uuid": "7301b122-4a32-4458-83f3-aede54e1d017", 00:32:09.709 "name": "lvs_0", 00:32:09.709 "base_bdev": "Nvme0n1", 00:32:09.709 "total_data_clusters": 930, 00:32:09.709 "free_clusters": 0, 00:32:09.709 "block_size": 512, 00:32:09.709 "cluster_size": 1073741824 00:32:09.709 }, 00:32:09.709 { 00:32:09.709 "uuid": "9ab21a98-3df8-4bc9-be22-08aa5a09ba96", 00:32:09.709 "name": "lvs_n_0", 00:32:09.709 "base_bdev": "d9af6add-5946-443a-999c-1923cb8df84d", 00:32:09.709 "total_data_clusters": 237847, 00:32:09.709 "free_clusters": 237847, 00:32:09.709 "block_size": 512, 00:32:09.709 "cluster_size": 4194304 00:32:09.709 } 00:32:09.709 ]' 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="9ab21a98-3df8-4bc9-be22-08aa5a09ba96") .free_clusters' 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=237847 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9ab21a98-3df8-4bc9-be22-08aa5a09ba96") .cluster_size' 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=4194304 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=951388 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 951388 00:32:09.709 951388 00:32:09.709 10:59:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:10.274 c1fe7de2-2b05-40b4-b05d-53acdb4bb6be 00:32:10.274 10:59:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:10.532 10:59:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:10.789 10:59:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:32:10.789 10:59:39 nvmf_rdma.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
fio_dir=/usr/src/fio 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:32:10.790 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:11.063 10:59:39 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:11.321 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:11.321 fio-3.35 00:32:11.321 Starting 1 thread 00:32:11.321 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.855 00:32:13.855 test: (groupid=0, jobs=1): err= 0: pid=64231: Mon Jun 10 10:59:42 2024 00:32:13.855 read: IOPS=9793, BW=38.3MiB/s (40.1MB/s)(76.7MiB/2006msec) 00:32:13.855 slat (nsec): min=1413, max=28839, avg=1577.55, stdev=555.19 00:32:13.855 clat (usec): min=3504, max=11485, avg=6476.18, stdev=199.59 00:32:13.855 lat (usec): min=3526, max=11487, avg=6477.76, stdev=199.52 00:32:13.855 clat percentiles (usec): 00:32:13.855 | 1.00th=[ 6325], 5.00th=[ 6390], 10.00th=[ 6456], 20.00th=[ 6456], 00:32:13.855 | 30.00th=[ 6456], 40.00th=[ 6456], 50.00th=[ 6456], 60.00th=[ 6456], 00:32:13.855 | 70.00th=[ 6456], 80.00th=[ 6521], 90.00th=[ 6521], 95.00th=[ 6521], 00:32:13.855 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 9896], 99.95th=[10683], 00:32:13.855 | 99.99th=[11469] 00:32:13.855 bw ( KiB/s): min=38075, max=39784, per=99.94%, avg=39150.75, stdev=812.93, samples=4 00:32:13.855 iops : min= 9518, 
max= 9946, avg=9787.50, stdev=203.56, samples=4 00:32:13.855 write: IOPS=9808, BW=38.3MiB/s (40.2MB/s)(76.9MiB/2006msec); 0 zone resets 00:32:13.855 slat (nsec): min=1460, max=18649, avg=1676.01, stdev=601.02 00:32:13.855 clat (usec): min=4958, max=11465, avg=6496.33, stdev=187.93 00:32:13.855 lat (usec): min=4968, max=11466, avg=6498.00, stdev=187.87 00:32:13.855 clat percentiles (usec): 00:32:13.855 | 1.00th=[ 6390], 5.00th=[ 6456], 10.00th=[ 6456], 20.00th=[ 6456], 00:32:13.855 | 30.00th=[ 6456], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6521], 00:32:13.855 | 70.00th=[ 6521], 80.00th=[ 6521], 90.00th=[ 6521], 95.00th=[ 6521], 00:32:13.855 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 9765], 99.95th=[10552], 00:32:13.855 | 99.99th=[11469] 00:32:13.855 bw ( KiB/s): min=38522, max=39768, per=99.91%, avg=39200.50, stdev=516.06, samples=4 00:32:13.855 iops : min= 9630, max= 9942, avg=9800.00, stdev=129.23, samples=4 00:32:13.855 lat (msec) : 4=0.01%, 10=99.92%, 20=0.07% 00:32:13.855 cpu : usr=99.50%, sys=0.15%, ctx=9, majf=0, minf=2 00:32:13.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:13.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.855 issued rwts: total=19646,19676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:13.855 00:32:13.855 Run status group 0 (all jobs): 00:32:13.855 READ: bw=38.3MiB/s (40.1MB/s), 38.3MiB/s-38.3MiB/s (40.1MB/s-40.1MB/s), io=76.7MiB (80.5MB), run=2006-2006msec 00:32:13.855 WRITE: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=76.9MiB (80.6MB), run=2006-2006msec 00:32:13.855 10:59:42 nvmf_rdma.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:13.855 10:59:42 nvmf_rdma.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:13.855 10:59:42 nvmf_rdma.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:18.093 10:59:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:18.093 10:59:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:20.625 10:59:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:20.625 10:59:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@120 -- # set +e 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:22.529 rmmod nvme_rdma 00:32:22.529 rmmod nvme_fabrics 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 60536 ']' 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 60536 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 60536 ']' 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 60536 00:32:22.529 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:32:22.530 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:22.530 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 60536 00:32:22.530 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:22.530 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:22.530 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60536' 00:32:22.530 killing process with pid 60536 00:32:22.530 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 60536 00:32:22.530 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 60536 00:32:22.789 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:22.789 10:59:51 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:22.789 00:32:22.789 real 0m37.504s 00:32:22.789 user 2m41.064s 00:32:22.789 sys 0m6.344s 00:32:22.789 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:22.789 10:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.789 ************************************ 00:32:22.789 END TEST nvmf_fio_host 00:32:22.789 ************************************ 00:32:22.789 10:59:51 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:22.789 10:59:51 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:22.789 10:59:51 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:22.789 10:59:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:22.789 ************************************ 00:32:22.789 START TEST nvmf_failover 00:32:22.789 ************************************ 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:22.789 * Looking for test storage... 
00:32:22.789 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:22.789 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:22.790 10:59:51 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.048 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.048 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.048 10:59:51 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.048 10:59:51 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:28.319 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:28.319 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@377 -- # modinfo irdma 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:28.319 Found net devices under 0000:af:00.0: cvl_0_0 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.319 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:28.320 Found net devices under 0000:af:00.1: cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:32:28.320 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:28.320 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:32:28.320 altname enp175s0f0np0 00:32:28.320 altname ens801f0np0 00:32:28.320 inet 192.168.100.8/24 scope global cvl_0_0 00:32:28.320 valid_lft forever preferred_lft forever 00:32:28.320 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:32:28.320 valid_lft forever preferred_lft forever 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:32:28.320 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:32:28.320 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:32:28.320 altname enp175s0f1np1 00:32:28.320 altname ens801f1np1 00:32:28.320 inet 192.168.100.9/24 scope global cvl_0_1 00:32:28.320 valid_lft forever preferred_lft forever 00:32:28.320 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto 
kernel_ll 00:32:28.320 valid_lft forever preferred_lft forever 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:28.320 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:28.580 192.168.100.9' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:28.580 192.168.100.9' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:28.580 192.168.100.9' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=69347 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 69347 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 69347 ']' 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:28.580 10:59:57 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:28.580 [2024-06-10 10:59:57.442407] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:32:28.580 [2024-06-10 10:59:57.442456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.580 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.580 [2024-06-10 10:59:57.502431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:28.580 [2024-06-10 10:59:57.580816] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:28.580 [2024-06-10 10:59:57.580850] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.580 [2024-06-10 10:59:57.580857] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.580 [2024-06-10 10:59:57.580863] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.580 [2024-06-10 10:59:57.580868] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.580 [2024-06-10 10:59:57.580985] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:28.580 [2024-06-10 10:59:57.581071] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.580 [2024-06-10 10:59:57.581072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:29.518 [2024-06-10 10:59:58.450031] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x252e0d0/0x252d710) succeed. 00:32:29.518 [2024-06-10 10:59:58.458758] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x252f400/0x252dc90) succeed. 00:32:29.518 [2024-06-10 10:59:58.458783] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:32:29.518 10:59:58 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:29.777 Malloc0 00:32:29.777 10:59:58 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:30.036 10:59:58 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:30.036 10:59:59 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:30.295 [2024-06-10 10:59:59.179714] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:30.295 10:59:59 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:32:30.554 [2024-06-10 10:59:59.348278] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:32:30.554 [2024-06-10 10:59:59.524925] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=69633 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 69633 /var/tmp/bdevperf.sock 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 69633 ']' 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:30.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
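The harness has now started bdevperf suspended and is waiting on its private RPC socket; everything that follows is driven over that socket. For readers reproducing the failover exercise by hand, the sketch below collects the same commands the harness issues in the trace that follows (taken verbatim from this log, with paths shortened to be relative to an SPDK checkout); it is an illustration of the sequence, not the harness script itself:

    # Start bdevperf suspended (-z) with a private RPC socket (host/failover.sh@30).
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

    # Once the socket is up, attach the target over RDMA on the primary and
    # secondary ports (host/failover.sh@35 and @36).
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Start the verify workload, then force a failover by removing the active
    # listener on the target side while I/O is in flight (host/failover.sh@38-43).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420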
00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
00:32:30.554 10:59:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:31.491 11:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:32:31.491 11:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:32:31.491 11:00:00 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:31.749 NVMe0n1
00:32:31.749 11:00:00 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:32.008
00:32:32.008 11:00:00 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:32.008 11:00:00 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=69902
00:32:32.008 11:00:00 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:32:32.943 11:00:01 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:32:33.205 11:00:02 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:36.485 11:00:05 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:36.485
00:32:36.485 11:00:05 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:32:36.741 11:00:05 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:40.071 11:00:08 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:32:40.071 [2024-06-10 11:00:08.719103] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:32:40.071 11:00:08 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:41.005 11:00:09 nvmf_rdma.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:32:41.005 11:00:09 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 69902
00:32:47.579 0
00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 69633
00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 69633 ']'
00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 69633
00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:32:47.579 11:00:16
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 69633 00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 69633' 00:32:47.579 killing process with pid 69633 00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 69633 00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 69633 00:32:47.579 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:47.579 [2024-06-10 10:59:59.593554] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:32:47.579 [2024-06-10 10:59:59.593608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69633 ] 00:32:47.579 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.579 [2024-06-10 10:59:59.654780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.579 [2024-06-10 10:59:59.727834] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.579 Running I/O for 15 seconds... 00:32:47.579 [2024-06-10 11:00:02.624975] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:47.579 [2024-06-10 11:00:02.625023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0xf9cc8968 00:32:47.579 [2024-06-10 11:00:02.625033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.579 [2024-06-10 11:00:02.625049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0xf9cc8968 00:32:47.579 [2024-06-10 11:00:02.625057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.579 [2024-06-10 11:00:02.625065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0xf9cc8968 00:32:47.579 [2024-06-10 11:00:02.625073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.579 [2024-06-10 11:00:02.625081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0xf9cc8968 00:32:47.579 [2024-06-10 11:00:02.625087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.579 [2024-06-10 11:00:02.625096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0xf9cc8968 00:32:47.579 [2024-06-10 11:00:02.625103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0
00:32:47.579 [... repeated READ (lba 26280-26488) / ABORTED - SQ DELETION (00/08) completion pairs omitted; identical apart from cid, lba, and buffer address; the capture ends mid-entry ...]
m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 
11:00:02.625665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0xf9cc8968 00:32:47.580 [2024-06-10 11:00:02.625760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.580 [2024-06-10 11:00:02.625774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.580 [2024-06-10 11:00:02.625782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.580 [2024-06-10 11:00:02.625789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:74 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.625985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.625993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 
11:00:02.626245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.581 [2024-06-10 11:00:02.626362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.581 [2024-06-10 11:00:02.626369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 
00:32:47.582 [2024-06-10 11:00:02.626391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.626928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.582 [2024-06-10 11:00:02.626935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:4980 p:0 m:0 dnr:0 00:32:47.582 [2024-06-10 11:00:02.627144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.582 [2024-06-10 11:00:02.627155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.582 [2024-06-10 11:00:02.627161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27256 len:8 PRP1 0x0 PRP2 0x0 00:32:47.583 [2024-06-10 11:00:02.627169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.583 [2024-06-10 11:00:02.627208] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
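Each aborted command above produced exactly one print_command/print_completion NOTICE pair, so the quickest sanity check of what was aborted is to tally those notices straight from the saved console output. A minimal sketch; "build.log" is an assumed placeholder for wherever this console text was captured, not a file this job guarantees:

+ # Tally aborted commands by opcode (READ vs WRITE) from the saved log,
+ # then count how many completions carried the SQ DELETION status.
+ grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c
+ grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l

For reference, "(00/08)" decodes as status code type 0x0 (generic command status) and status code 0x08, i.e. the command was aborted because its submission queue was deleted, which is the expected outcome when a qpair is torn down mid-I/O during a controller reset.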
00:32:47.583 [2024-06-10 11:00:02.627217] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:32:47.583 [2024-06-10 11:00:02.627224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:47.583 [2024-06-10 11:00:02.629998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:47.583 [2024-06-10 11:00:02.630036] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:32:47.583 [2024-06-10 11:00:02.643368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:32:47.583 [2024-06-10 11:00:02.686926] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:47.583 [2024-06-10 11:00:06.081004] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:32:47.583 [2024-06-10 11:00:06.081053 .. 11:00:06.082031] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: one command/completion pair logged per queued request on qid:1; interleaved READ lba:54896..55152 len:8 SGL KEYED DATA BLOCK key:0x87ca392c and WRITE lba:55224..55456 len:8 SGL DATA BLOCK OFFSET 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 (repeated per-command entries elided; the capture breaks off mid-entry at 11:00:06.082031)
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 
11:00:06.082182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x87ca392c 00:32:47.585 [2024-06-10 11:00:06.082311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x87ca392c 00:32:47.585 [2024-06-10 11:00:06.082325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x87ca392c 00:32:47.585 [2024-06-10 11:00:06.082341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x87ca392c 00:32:47.585 [2024-06-10 11:00:06.082356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x87ca392c 00:32:47.585 [2024-06-10 11:00:06.082371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x87ca392c 00:32:47.585 [2024-06-10 11:00:06.082386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.585 [2024-06-10 11:00:06.082558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.585 [2024-06-10 11:00:06.082565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082620] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x87ca392c 00:32:47.586 [2024-06-10 11:00:06.082880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x87ca392c 00:32:47.586 [2024-06-10 11:00:06.082895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.082982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.586 [2024-06-10 11:00:06.082988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4fb100 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.083245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.586 [2024-06-10 11:00:06.083255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.586 [2024-06-10 11:00:06.083262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55912 len:8 PRP1 0x0 PRP2 0x0 00:32:47.586 [2024-06-10 11:00:06.083271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.586 [2024-06-10 11:00:06.083306] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:32:47.586 [2024-06-10 11:00:06.083315] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:32:47.586 [2024-06-10 11:00:06.083322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.586 [2024-06-10 11:00:06.086099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.586 [2024-06-10 11:00:06.086135] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:47.586 [2024-06-10 11:00:06.099311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:47.586 [2024-06-10 11:00:06.143597] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:47.586 [2024-06-10 11:00:10.496979] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:32:47.586 [2024-06-10 11:00:10.497019] nvme_qpair.c: [repeated *NOTICE* pairs from nvme_io_qpair_print_command / spdk_nvme_print_completion: queued commands on sqid:1 (READ lba:97088-97504 len:8, SGL KEYED DATA BLOCK key:0xfebbb2a4; WRITE lba:97640-97952 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) completed as ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0]
00:32:47.590 [2024-06-10 11:00:10.498469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0xfebbb2a4
00:32:47.590 [2024-06-10 11:00:10.498476] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0xfebbb2a4 00:32:47.590 [2024-06-10 11:00:10.498671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.590 [2024-06-10 11:00:10.498846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.590 [2024-06-10 11:00:10.498852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.591 [2024-06-10 11:00:10.498867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.591 [2024-06-10 11:00:10.498882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.591 [2024-06-10 11:00:10.498897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.591 [2024-06-10 11:00:10.498911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0xfebbb2a4 00:32:47.591 [2024-06-10 11:00:10.498927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0xfebbb2a4 00:32:47.591 [2024-06-10 11:00:10.498942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.591 [2024-06-10 11:00:10.498962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.498970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.591 [2024-06-10 11:00:10.498977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:4bdb70 sqhd:48c0 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.499276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.591 [2024-06-10 11:00:10.499286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.591 [2024-06-10 11:00:10.499293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:32:47.591 [2024-06-10 11:00:10.499300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.499335] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:32:47.591 [2024-06-10 11:00:10.499344] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:32:47.591 [2024-06-10 11:00:10.499352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:47.591 [2024-06-10 11:00:10.499372] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:32:47.591 [2024-06-10 11:00:10.499381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.591 [2024-06-10 11:00:10.499388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:4fb100 sqhd:d040 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.499396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.591 [2024-06-10 11:00:10.499403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:4fb100 sqhd:d040 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.499410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.591 [2024-06-10 11:00:10.499417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:4fb100 sqhd:d040 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.499424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.591 [2024-06-10 11:00:10.499431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:4fb100 sqhd:d040 p:0 m:0 dnr:0 00:32:47.591 [2024-06-10 11:00:10.512358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:47.591 [2024-06-10 11:00:10.512381] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:47.591 [2024-06-10 11:00:10.512388] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:47.591 [2024-06-10 11:00:10.515182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.591 [2024-06-10 11:00:10.566315] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
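The storm of ABORTED - SQ DELETION notices above is the expected side effect of a path switch: deleting the submission queue aborts every queued I/O, bdev_nvme frees the disconnected qpair, rotates the TRID from 192.168.100.8:4422 back to :4420, and the controller reset completes. A minimal sketch of the same rotation driven by hand, using only rpc.py calls that appear verbatim in this trace (socket, address, ports, and NQN are taken from this run; the detach order mirrors failover.sh):

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1
  for port in 4420 4422 4421; do
      # Drop the active path; bdev_nvme fails over to the next attached TRID.
      $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s $port -f ipv4 -n $nqn
      sleep 3  # give the reset time to complete before the next rotation
      # The controller must still be visible on one of the remaining paths.
      $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
  done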
00:32:47.591
00:32:47.591 Latency(us)
00:32:47.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.591 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:47.591 Verification LBA range: start 0x0 length 0x4000
00:32:47.591 NVMe0n1 : 15.01 16044.98 62.68 302.62 0.00 7807.99 450.56 591197.14
00:32:47.591 ===================================================================================================================
00:32:47.591 Total : 16044.98 62.68 302.62 0.00 7807.99 450.56 591197.14
00:32:47.591 Received shutdown signal, test time was about 15.000000 seconds
00:32:47.591
00:32:47.591 Latency(us)
00:32:47.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.591 ===================================================================================================================
00:32:47.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=72842
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 72842 /var/tmp/bdevperf.sock
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 72842 ']'
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
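failover.sh@65-67 above is the pass/fail check for the run just finished: the bdevperf log must contain exactly three 'Resetting controller successful' lines, one per rotated path. The idiom, sketched with a placeholder $bdevperf_log standing in for the captured output (the actual input stream is elided in this trace):

  count=$(grep -c 'Resetting controller successful' "$bdevperf_log")  # $bdevperf_log: placeholder for the captured bdevperf output
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi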
00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:47.591 11:00:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:48.156 11:00:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:48.156 11:00:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:32:48.156 11:00:17 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:32:48.414 [2024-06-10 11:00:17.284482] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:32:48.414 11:00:17 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:32:48.672 [2024-06-10 11:00:17.457070] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:32:48.672 11:00:17 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.931 NVMe0n1 00:32:48.931 11:00:17 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.931 00:32:48.931 11:00:17 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:49.189 00:32:49.189 11:00:18 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:49.189 11:00:18 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:49.456 11:00:18 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:49.723 11:00:18 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:53.006 11:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:53.006 11:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:53.006 11:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=73749 00:32:53.006 11:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:53.006 11:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 73749 00:32:53.945 0 00:32:53.945 11:00:22 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:53.945 [2024-06-10 11:00:16.337107] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 
initialization...
00:32:53.945 [2024-06-10 11:00:16.337157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72842 ]
00:32:53.945 EAL: No free 2048 kB hugepages reported on node 1
00:32:53.945 [2024-06-10 11:00:16.396389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:53.945 [2024-06-10 11:00:16.463538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:32:53.945 [2024-06-10 11:00:18.533638] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:32:53.945 [2024-06-10 11:00:18.534828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:53.945 [2024-06-10 11:00:18.534858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:53.945 [2024-06-10 11:00:18.553942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:32:53.945 [2024-06-10 11:00:18.571746] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:53.945 Running I/O for 1 seconds...
00:32:53.945
00:32:53.945 Latency(us)
00:32:53.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.945 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:53.945 Verification LBA range: start 0x0 length 0x4000
00:32:53.945 NVMe0n1 : 1.01 18167.48 70.97 0.00 0.00 7004.87 2449.80 15291.73
00:32:53.945 ===================================================================================================================
00:32:53.945 Total : 18167.48 70.97 0.00 0.00 7004.87 2449.80 15291.73
00:32:53.945 11:00:22 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:53.945 11:00:22 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:32:54.203 11:00:23 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:54.203 11:00:23 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:54.203 11:00:23 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:32:54.461 11:00:23 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:54.719 11:00:23 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 72842
00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 72842 ']'
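The try.txt dump above records the second, shorter run: one failover from :4420 to :4421 mid-I/O, followed by a clean 1-second verify pass at roughly 18.2k IOPS (7004.87 us average completion at queue depth 128). Extracting the headline figures from such a summary is a one-liner, assuming the file holds the raw bdevperf output without the console timestamps shown here:

  awk '$1 == "Total" { print "IOPS:", $3, "MiB/s:", $4 }' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt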
00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 72842 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 72842 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 72842' 00:32:57.998 killing process with pid 72842 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 72842 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 72842 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:57.998 11:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:58.256 rmmod nvme_rdma 00:32:58.256 rmmod nvme_fabrics 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 69347 ']' 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 69347 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 69347 ']' 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 69347 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 69347 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with 
pid 69347' 00:32:58.256 killing process with pid 69347 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 69347 00:32:58.256 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 69347 00:32:58.514 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:58.514 11:00:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:58.514 00:32:58.514 real 0m35.775s 00:32:58.514 user 2m2.780s 00:32:58.514 sys 0m6.026s 00:32:58.514 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:58.514 11:00:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:58.514 ************************************ 00:32:58.514 END TEST nvmf_failover 00:32:58.514 ************************************ 00:32:58.514 11:00:27 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:32:58.514 11:00:27 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:58.514 11:00:27 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:58.514 11:00:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:58.514 ************************************ 00:32:58.514 START TEST nvmf_host_discovery 00:32:58.514 ************************************ 00:32:58.514 11:00:27 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:32:58.773 * Looking for test storage... 00:32:58.773 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:58.773 
11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.773 11:00:27 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:58.774 11:00:27 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:32:58.774 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:32:58.774 00:32:58.774 real 0m0.100s 00:32:58.774 user 0m0.042s 00:32:58.774 sys 0m0.064s 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.774 ************************************ 00:32:58.774 END TEST nvmf_host_discovery 00:32:58.774 ************************************ 00:32:58.774 11:00:27 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:32:58.774 11:00:27 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:58.774 11:00:27 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:58.774 11:00:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:58.774 ************************************ 00:32:58.774 START TEST nvmf_host_multipath_status 00:32:58.774 ************************************ 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:32:58.774 * Looking for test storage... 
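The nvmf_host_discovery "test" above is a deliberate no-op on this rig: discovery.sh@11-13 checks the transport and exits 0 before doing any work, because the rdma stack cannot configure the same IP for host and target. The guard reduces to the following sketch (the variable name is illustrative; the trace only shows it already expanded to rdma):

  if [ "$TEST_TRANSPORT" == "rdma" ]; then
      # Bail out before any setup; the skip still counts as a passing test.
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi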
00:32:58.774 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.774 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/bpftrace.sh 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:59.033 11:00:27 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:59.033 11:00:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:33:05.629 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:05.630 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:05.630 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.630 11:00:33 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # modinfo irdma 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:05.630 Found net devices under 0000:af:00.0: cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:05.630 Found net devices under 0000:af:00.1: cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 
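Note: the trace above shows prepare_net_devs matching the E810 ports by PCI vendor/device ID (0x8086/0x159b) and then loading irdma with RoCE enabled. The following is only a minimal standalone sketch of that discovery idea, not the actual nvmf/common.sh implementation (which builds a pci_bus_cache first); the sysfs layout and IDs are the ones visible in the log.

    # Sketch: find Intel E810 (device 0x159b) NICs via sysfs, then load the
    # irdma driver with RoCEv2 enabled, as the trace above does.
    intel=0x8086
    e810=0x159b
    pci_devs=()
    for dev in /sys/bus/pci/devices/*; do
        # vendor/device files contain e.g. "0x8086"; $(<file) drops the newline
        [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") == "$e810" ]] || continue
        pci_devs+=("${dev##*/}")    # e.g. 0000:af:00.0
    done
    # E810 RDMA needs irdma with RoCE turned on (the default mode is iWARP):
    modprobe irdma roce_ena=1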
00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 
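The allocate_nic_ips pass above resolves each RDMA-capable interface to its IPv4 address with an ip/awk/cut pipeline. Condensed into a helper, the pipeline from common.sh@112-113 looks like this; the function name matches the trace, the packaging is a sketch.

    # Print the first IPv4 address assigned to an interface, without the /prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0   # -> 192.168.100.8 in the run above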
00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:33:05.630 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:05.630 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:33:05.630 altname enp175s0f0np0 00:33:05.630 altname ens801f0np0 00:33:05.630 inet 192.168.100.8/24 scope global cvl_0_0 00:33:05.630 valid_lft forever preferred_lft forever 00:33:05.630 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:33:05.630 valid_lft forever preferred_lft forever 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:33:05.630 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:05.630 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:33:05.630 altname enp175s0f1np1 00:33:05.630 altname ens801f1np1 00:33:05.630 inet 192.168.100.9/24 scope global cvl_0_1 00:33:05.630 valid_lft forever preferred_lft forever 00:33:05.630 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:33:05.630 valid_lft forever preferred_lft forever 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:05.630 11:00:33 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:05.630 192.168.100.9' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:05.630 192.168.100.9' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:05.630 192.168.100.9' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 
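Here the script collapses the per-port addresses into RDMA_IP_LIST and peels off the first and second entries with head/tail, as common.sh@456-458 shows. A sketch of that selection, assuming get_available_rdma_ips prints one address per line as it does above:

    RDMA_IP_LIST=$(get_available_rdma_ips)                # "192.168.100.8" / "192.168.100.9" above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    [[ -n $NVMF_FIRST_TARGET_IP ]] || exit 1              # common.sh@459 bails out if no RDMA IP was found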
00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=78107 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 78107 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 78107 ']' 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:05.630 11:00:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.630 [2024-06-10 11:00:33.841666] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:33:05.630 [2024-06-10 11:00:33.841708] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.630 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.630 [2024-06-10 11:00:33.902781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:05.630 [2024-06-10 11:00:33.973547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.631 [2024-06-10 11:00:33.973591] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.631 [2024-06-10 11:00:33.973597] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.631 [2024-06-10 11:00:33.973603] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:05.631 [2024-06-10 11:00:33.973608] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.631 [2024-06-10 11:00:33.973656] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.631 [2024-06-10 11:00:33.973659] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.631 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:05.631 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:33:05.631 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:05.631 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:05.631 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:05.888 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.888 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=78107 00:33:05.888 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:05.888 [2024-06-10 11:00:34.837817] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1f612d0/0x1f60910) succeed. 00:33:05.888 [2024-06-10 11:00:34.846637] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1f62580/0x1f60e90) succeed. 00:33:05.888 [2024-06-10 11:00:34.846660] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:33:05.888 11:00:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:06.145 Malloc0 00:33:06.145 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:06.403 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:06.403 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:06.662 [2024-06-10 11:00:35.568227] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:06.662 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:06.920 [2024-06-10 11:00:35.736708] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=78501 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 78501 /var/tmp/bdevperf.sock 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 78501 ']' 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:06.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
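At this point the target side is fully assembled: an RDMA transport, a malloc bdev, a subsystem with two RDMA listeners on ports 4420 and 4421, and a bdevperf instance that will act as the multipath host. Replayed from the trace above with the long workspace paths shortened to rpc.py/bdevperf; the addresses, NQN, and flags are exactly the ones logged:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    # Host side: bdevperf in RPC-server mode (-z), queue depth 128, 4 KiB verify workload for 90 s.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &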
00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:06.920 11:00:35 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:07.858 11:00:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:07.858 11:00:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:33:07.858 11:00:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:07.858 11:00:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:08.116 Nvme0n1 00:33:08.116 11:00:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:08.375 Nvme0n1 00:33:08.375 11:00:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:08.375 11:00:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:10.279 11:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:10.279 11:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:33:10.537 11:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:10.795 11:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:11.732 11:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:11.732 11:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:11.732 11:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.732 11:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:11.991 11:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.991 11:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:11.991 11:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.991 11:00:40 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.250 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:12.509 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.509 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:12.509 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.509 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:12.768 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.768 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:12.768 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:12.768 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.768 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.768 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:12.768 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:13.027 11:00:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:13.294 11:00:42 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@95 -- # sleep 1 00:33:14.232 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:14.232 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:14.232 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.232 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.491 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.749 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.749 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.749 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.749 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.008 11:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.266 11:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.266 11:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:15.266 11:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:15.524 11:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:33:15.524 11:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:16.901 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:16.901 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:16.901 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.901 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.901 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.902 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:16.902 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.902 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:16.902 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.902 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:16.902 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.902 11:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:17.161 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.161 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:17.161 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:17.161 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:17.420 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.679 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.679 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:17.679 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:17.938 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:17.938 11:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:18.906 11:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:18.906 11:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:18.906 11:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.906 11:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.170 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.170 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:19.170 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.170 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.429 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.688 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.688 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:19.688 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.688 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.947 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.947 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:19.947 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.947 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.947 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.947 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:19.947 11:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:20.206 11:00:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:20.464 11:00:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 
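The loop that repeats from here on is always the same two helpers: set_ANA_state flips the ANA state of the two listeners, then check_status/port_status reads bdev_nvme_get_io_paths through jq and compares the current/connected/accessible fields for each port. A condensed sketch of those helpers as they appear at multipath_status.sh@59-64 in the trace, with rpc.py paths shortened; the NQN and target IP are the values used above.

    NQN=nqn.2016-06.io.spdk:cnode1
    set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
        rpc.py nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }
    port_status() {     # $1 = port, $2 = field (current/connected/accessible), $3 = expected value
        [[ $(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }
    # e.g. after set_ANA_state inaccessible optimized, only 4421 should be accessible:
    port_status 4421 accessible true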
00:33:21.400 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:21.400 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:21.400 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.400 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.659 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.918 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.918 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.918 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.918 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:22.177 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.177 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:22.177 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.177 11:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.177 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.177 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:22.177 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.177 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.435 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.435 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:22.435 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:22.694 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:22.694 11:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.067 11:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:24.067 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.067 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.067 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.067 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.325 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.325 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.325 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:24.325 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.584 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.842 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.842 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:25.100 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:25.100 11:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:33:25.100 11:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:25.358 11:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:26.294 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:26.294 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.294 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.294 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:26.553 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.553 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:26.553 11:00:55 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:26.553 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.812 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:27.070 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.070 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:27.070 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.070 11:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:27.329 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.329 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:27.329 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.329 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:27.329 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.329 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:27.329 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:27.589 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
00:33:27.589 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:33:27.847 11:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:33:28.781 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:33:28.781 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:28.781 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:28.781 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:29.039 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:29.039 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:29.039 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.039 11:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:29.039 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:29.039 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:29.039 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.039 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:29.297 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:29.297 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:29.297 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.297 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
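check_status (@125 above) bundles six port_status expectations in a fixed order, reconstructed here from the @68-@73 call sites visible in the trace:

    check_status() {
        # Expected values, in call-site order:
        # 4420-current 4421-current 4420-connected 4421-connected
        # 4420-accessible 4421-accessible
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

So check_status false true true true true true asserts that after the non_optimized/optimized transition the 4420 path stops being the current (active) path while both paths remain connected and accessible.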
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:29.556 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:29.814 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:29.814 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:33:29.814 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:33:30.073 11:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:33:30.073 11:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:31.449 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:31.706 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:31.706 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:31.706 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:31.706 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:31.963 11:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:32.221 11:01:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:32.221 11:01:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:33:32.221 11:01:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:33:32.480 11:01:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:33:32.480 11:01:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:33.856 11:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:34.148 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:34.148 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:34.148 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:34.148 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:34.407 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 78501
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 78501 ']'
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 78501
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 78501
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:33:34.665 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 78501'
killing process with pid 78501
11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 78501
11:01:03 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 78501
00:33:34.927 Connection closed with partial response:
00:33:34.927
00:33:34.927
00:33:34.927 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 78501
00:33:34.927 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:34.927 [2024-06-10 11:00:35.780575] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization...
00:33:34.927 [2024-06-10 11:00:35.780622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78501 ]
00:33:34.927 EAL: No free 2048 kB hugepages reported on node 1
00:33:34.927 [2024-06-10 11:00:35.834270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:34.927 [2024-06-10 11:00:35.905880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:33:34.927 Running I/O for 90 seconds...
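killprocess 78501 above is the shared autotest helper tearing down bdevperf: check the pid is set and alive, make sure it is not a sudo wrapper, then kill and reap it. A condensed sketch of the sequence the @949-@973 lines trace (a reconstruction; the real common/autotest_common.sh carries more error handling):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1          # @949: refuse an empty pid
        kill -0 "$pid"                     # @953: is the process still alive?
        local process_name
        if [[ $(uname) == Linux ]]; then   # @954
            process_name=$(ps --no-headers -o comm= "$pid")   # @955
        fi
        [[ $process_name != sudo ]] || return 1   # @959: never signal a sudo wrapper
        echo "killing process with pid $pid"      # @967
        kill "$pid" && wait "$pid"                # @968/@973
    }

The try.txt dump that follows is bdevperf's own log. The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in it are NVMe status code type 3h (Path Related Status), status code 02h: the target fails every command queued on a path whose ANA state is inaccessible, which is the signal the host multipath layer uses to steer I/O to the surviving path.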
00:33:34.927 [2024-06-10 11:00:49.085030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0xbc2da74a
00:33:34.927 [2024-06-10 11:00:49.085070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:34.927 [2024-06-10 11:00:49.085310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:34.927 [2024-06-10 11:00:49.085317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
[... remaining nvme_qpair.c print_command/print_completion pairs trimmed for readability: every queued READ/WRITE on qid:1 in the lba 53296-54312 window completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:34.931 [2024-06-10 11:00:49.087667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:34.931 [2024-06-10 11:00:49.095777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:34.931 [2024-06-10 11:01:01.479954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:34.931 [2024-06-10 11:01:01.480000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:34.931 [2024-06-10
11:01:01.480029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.931 [2024-06-10 11:01:01.480037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:34.931 [2024-06-10 11:01:01.480047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0xbc2da74a 00:33:34.931 [2024-06-10 11:01:01.480054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:34.931 [2024-06-10 11:01:01.480064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.931 [2024-06-10 11:01:01.480070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:34.931 [2024-06-10 11:01:01.480080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0xbc2da74a 00:33:34.931 [2024-06-10 11:01:01.480086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:34.931 [2024-06-10 11:01:01.480096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27672 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000075cc000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0xbc2da74a 00:33:34.932 [2024-06-10 11:01:01.480844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.932 [2024-06-10 11:01:01.480906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:34.932 [2024-06-10 11:01:01.480915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.933 [2024-06-10 11:01:01.480922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:34.933 [2024-06-10 11:01:01.480930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 
nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.933 [2024-06-10 11:01:01.480936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:34.933 [2024-06-10 11:01:01.480945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0xbc2da74a 00:33:34.933 [2024-06-10 11:01:01.480952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:34.933 [2024-06-10 11:01:01.480967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0xbc2da74a 00:33:34.933 [2024-06-10 11:01:01.480974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:34.933 [2024-06-10 11:01:01.480983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:28296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.933 [2024-06-10 11:01:01.480989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:34.933 [2024-06-10 11:01:01.480998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481083] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 
11:01:01.481239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:34.934 [2024-06-10 11:01:01.481388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.934 [2024-06-10 11:01:01.481449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:34.934 [2024-06-10 11:01:01.481476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0xbc2da74a 00:33:34.934 [2024-06-10 11:01:01.481483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:34.934 Received shutdown signal, test time was about 26.188362 seconds 00:33:34.934 00:33:34.934 Latency(us) 00:33:34.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.934 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:34.934 Verification LBA range: start 0x0 length 0x4000 00:33:34.934 Nvme0n1 : 26.19 16046.84 62.68 0.00 0.00 7956.60 51.44 3019898.88 00:33:34.934 =================================================================================================================== 00:33:34.934 Total : 16046.84 62.68 0.00 0.00 7956.60 51.44 3019898.88 00:33:34.934 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:35.194 11:01:03 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:35.194 rmmod nvme_rdma 00:33:35.194 rmmod nvme_fabrics 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 78107 ']' 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 78107 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 78107 ']' 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 78107 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 78107 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 78107' 00:33:35.194 killing process with pid 78107 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 78107 00:33:35.194 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 78107 00:33:35.453 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:35.453 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:35.453 00:33:35.453 real 0m36.608s 00:33:35.453 user 1m45.355s 00:33:35.453 sys 0m7.843s 00:33:35.453 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:35.453 11:01:04 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:35.453 ************************************ 00:33:35.453 END TEST nvmf_host_multipath_status 00:33:35.453 ************************************ 00:33:35.453 11:01:04 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:33:35.453 
11:01:04 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:35.453 11:01:04 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:35.453 11:01:04 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:35.453 ************************************ 00:33:35.453 START TEST nvmf_discovery_remove_ifc 00:33:35.453 ************************************ 00:33:35.453 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:33:35.453 * Looking for test storage... 00:33:35.453 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:33:35.453 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.453 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repeated toolchain segments elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:35.454 [... paths/export.sh@3, @4, @5 (export PATH), and @6 (echo $PATH) elided: rotations of the same PATH value ...]
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:33:35.454 11:01:04
nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:33:35.454 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:33:35.454 00:33:35.454 real 0m0.101s 00:33:35.454 user 0m0.045s 00:33:35.454 sys 0m0.063s 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:35.454 11:01:04 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.454 ************************************ 00:33:35.454 END TEST nvmf_discovery_remove_ifc 00:33:35.454 ************************************ 00:33:35.713 11:01:04 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:33:35.713 11:01:04 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:35.713 11:01:04 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:35.713 11:01:04 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:35.713 ************************************ 00:33:35.713 START TEST nvmf_identify_kernel_target 00:33:35.713 ************************************ 00:33:35.713 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:33:35.713 * Looking for test storage... 00:33:35.713 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:33:35.713 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.713 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:35.713 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.713 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.713 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repeated toolchain segments elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:35.714 [... paths/export.sh@3, @4, @5 (export PATH), and @6 (echo $PATH) elided: rotations of the same PATH value, as above ...]
00:33:35.714 11:01:04
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:35.714 11:01:04 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 
00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:42.283 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:42.284 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:42.284 
11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:42.284 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # modinfo irdma 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:42.284 Found net devices under 0000:af:00.0: cvl_0_0 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:42.284 Found net devices under 0000:af:00.1: cvl_0_1 00:33:42.284 11:01:10 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:33:42.284 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:42.284 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:33:42.284 altname enp175s0f0np0 00:33:42.284 altname ens801f0np0 00:33:42.284 inet 192.168.100.8/24 scope global cvl_0_0 00:33:42.284 valid_lft forever preferred_lft forever 00:33:42.284 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:33:42.284 valid_lft forever preferred_lft forever 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:33:42.284 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:42.284 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:33:42.284 altname enp175s0f1np1 00:33:42.284 altname ens801f1np1 00:33:42.284 inet 192.168.100.9/24 scope global cvl_0_1 00:33:42.284 valid_lft forever preferred_lft forever 00:33:42.284 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:33:42.284 valid_lft forever preferred_lft forever 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:42.284 
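[Each interface's IPv4 address is pulled out with the ip/awk/cut pipeline traced above. A self-contained sketch of that lookup, using the same pipeline the log shows rather than the literal nvmf/common.sh function:

    get_ip_address() {
        local interface=$1
        # in `ip -o -4 addr show` output, field 3 is "inet" and field 4 is "ADDR/PREFIX"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip_addr=$(get_ip_address cvl_0_0)    # yields 192.168.100.8 on this rig
    [[ -n $ip_addr ]] || echo "no IPv4 address on cvl_0_0" >&2
]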
11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:42.284 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:33:42.285 11:01:10 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:42.285 192.168.100.9' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:42.285 192.168.100.9' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:42.285 192.168.100.9' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:33:42.285 
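[configure_kernel_target builds the in-kernel nvmet target through configfs. The xtrace below shows the mkdir/echo/ln steps but not the echo redirection targets, so the attribute file names in this consolidated sketch are the standard nvmet configfs names, an assumption rather than something read from the log. The matching teardown, traced near the end of this test, is included for symmetry; run as root with the nvmet and nvmet-rdma modules loaded:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1
    # setup: one subsystem, one namespace backed by /dev/nvme1n1, one RDMA port
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1 > "$subsys/attr_allow_any_host"            # assumed target of the bare 'echo 1'
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$port/addr_traddr"
    echo rdma > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"               # expose the subsystem on the port
    # teardown mirror (clean_kernel_target): unlink, disable, remove, unload
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    echo 0 > "$subsys/namespaces/1/enable"            # assumed target of the 'echo 0'
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_rdma nvmet
]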
11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:42.285 11:01:10 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:33:44.186 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:33:44.186 Waiting for block devices as requested 00:33:44.445 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:44.445 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:44.445 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:44.703 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:44.703 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:44.703 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:44.703 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:44.961 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:44.961 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:44.961 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:44.961 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:45.219 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:45.219 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:45.219 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:45.478 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:45.478 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:45.478 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:45.478 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:45.737 No valid GPT data, bailing 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:33:45.737 No valid GPT data, bailing 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n2 ]] 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n2 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # continue 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:45.737 
11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:45.737 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:33:45.737 00:33:45.737 Discovery Log Number of Records 2, Generation counter 2 00:33:45.737 =====Discovery Log Entry 0====== 00:33:45.737 trtype: rdma 00:33:45.737 adrfam: ipv4 00:33:45.737 subtype: current discovery subsystem 00:33:45.737 treq: not specified, sq flow control disable supported 00:33:45.737 portid: 1 00:33:45.737 trsvcid: 4420 00:33:45.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:45.737 traddr: 192.168.100.8 00:33:45.737 eflags: none 00:33:45.737 rdma_prtype: not specified 00:33:45.737 rdma_qptype: connected 00:33:45.737 rdma_cms: rdma-cm 00:33:45.737 rdma_pkey: 0x0000 00:33:45.737 =====Discovery Log Entry 1====== 00:33:45.737 trtype: rdma 00:33:45.737 adrfam: ipv4 00:33:45.737 subtype: nvme subsystem 00:33:45.737 treq: not specified, sq flow control disable supported 00:33:45.737 portid: 1 00:33:45.737 trsvcid: 4420 00:33:45.737 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:45.737 traddr: 192.168.100.8 00:33:45.737 eflags: none 00:33:45.737 rdma_prtype: not specified 00:33:45.737 rdma_qptype: connected 00:33:45.737 rdma_cms: rdma-cm 00:33:45.737 rdma_pkey: 0x0000 00:33:45.997 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:33:45.997 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:45.997 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.997 ===================================================== 00:33:45.997 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:45.997 ===================================================== 00:33:45.997 Controller Capabilities/Features 00:33:45.997 ================================ 00:33:45.997 Vendor ID: 0000 00:33:45.997 Subsystem Vendor ID: 0000 00:33:45.997 Serial Number: b67bfab8eb8f2bed4eba 00:33:45.997 Model Number: Linux 00:33:45.997 Firmware Version: 6.7.0-68 00:33:45.997 Recommended Arb Burst: 0 00:33:45.997 IEEE OUI Identifier: 00 00 00 00:33:45.997 Multi-path I/O 00:33:45.997 May have multiple subsystem ports: No 00:33:45.997 May have multiple controllers: No 00:33:45.997 Associated with SR-IOV VF: No 00:33:45.997 Max Data Transfer Size: Unlimited 00:33:45.997 Max Number of Namespaces: 0 00:33:45.997 Max Number of I/O Queues: 1024 00:33:45.997 NVMe Specification Version (VS): 1.3 00:33:45.997 NVMe Specification Version (Identify): 1.3 00:33:45.997 Maximum Queue Entries: 
128 00:33:45.997 Contiguous Queues Required: No 00:33:45.997 Arbitration Mechanisms Supported 00:33:45.997 Weighted Round Robin: Not Supported 00:33:45.997 Vendor Specific: Not Supported 00:33:45.997 Reset Timeout: 7500 ms 00:33:45.997 Doorbell Stride: 4 bytes 00:33:45.997 NVM Subsystem Reset: Not Supported 00:33:45.997 Command Sets Supported 00:33:45.997 NVM Command Set: Supported 00:33:45.997 Boot Partition: Not Supported 00:33:45.997 Memory Page Size Minimum: 4096 bytes 00:33:45.997 Memory Page Size Maximum: 4096 bytes 00:33:45.997 Persistent Memory Region: Not Supported 00:33:45.997 Optional Asynchronous Events Supported 00:33:45.997 Namespace Attribute Notices: Not Supported 00:33:45.997 Firmware Activation Notices: Not Supported 00:33:45.997 ANA Change Notices: Not Supported 00:33:45.997 PLE Aggregate Log Change Notices: Not Supported 00:33:45.997 LBA Status Info Alert Notices: Not Supported 00:33:45.997 EGE Aggregate Log Change Notices: Not Supported 00:33:45.997 Normal NVM Subsystem Shutdown event: Not Supported 00:33:45.997 Zone Descriptor Change Notices: Not Supported 00:33:45.997 Discovery Log Change Notices: Supported 00:33:45.997 Controller Attributes 00:33:45.997 128-bit Host Identifier: Not Supported 00:33:45.997 Non-Operational Permissive Mode: Not Supported 00:33:45.997 NVM Sets: Not Supported 00:33:45.997 Read Recovery Levels: Not Supported 00:33:45.997 Endurance Groups: Not Supported 00:33:45.997 Predictable Latency Mode: Not Supported 00:33:45.997 Traffic Based Keep ALive: Not Supported 00:33:45.997 Namespace Granularity: Not Supported 00:33:45.997 SQ Associations: Not Supported 00:33:45.997 UUID List: Not Supported 00:33:45.997 Multi-Domain Subsystem: Not Supported 00:33:45.997 Fixed Capacity Management: Not Supported 00:33:45.997 Variable Capacity Management: Not Supported 00:33:45.997 Delete Endurance Group: Not Supported 00:33:45.997 Delete NVM Set: Not Supported 00:33:45.997 Extended LBA Formats Supported: Not Supported 00:33:45.997 Flexible Data Placement Supported: Not Supported 00:33:45.997 00:33:45.997 Controller Memory Buffer Support 00:33:45.997 ================================ 00:33:45.997 Supported: No 00:33:45.997 00:33:45.997 Persistent Memory Region Support 00:33:45.997 ================================ 00:33:45.997 Supported: No 00:33:45.997 00:33:45.997 Admin Command Set Attributes 00:33:45.997 ============================ 00:33:45.997 Security Send/Receive: Not Supported 00:33:45.997 Format NVM: Not Supported 00:33:45.997 Firmware Activate/Download: Not Supported 00:33:45.997 Namespace Management: Not Supported 00:33:45.997 Device Self-Test: Not Supported 00:33:45.997 Directives: Not Supported 00:33:45.997 NVMe-MI: Not Supported 00:33:45.997 Virtualization Management: Not Supported 00:33:45.997 Doorbell Buffer Config: Not Supported 00:33:45.997 Get LBA Status Capability: Not Supported 00:33:45.997 Command & Feature Lockdown Capability: Not Supported 00:33:45.997 Abort Command Limit: 1 00:33:45.997 Async Event Request Limit: 1 00:33:45.997 Number of Firmware Slots: N/A 00:33:45.997 Firmware Slot 1 Read-Only: N/A 00:33:45.997 Firmware Activation Without Reset: N/A 00:33:45.997 Multiple Update Detection Support: N/A 00:33:45.997 Firmware Update Granularity: No Information Provided 00:33:45.997 Per-Namespace SMART Log: No 00:33:45.997 Asymmetric Namespace Access Log Page: Not Supported 00:33:45.997 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:45.997 Command Effects Log Page: Not Supported 00:33:45.997 Get Log Page Extended Data: Supported 
00:33:45.997 Telemetry Log Pages: Not Supported 00:33:45.997 Persistent Event Log Pages: Not Supported 00:33:45.997 Supported Log Pages Log Page: May Support 00:33:45.997 Commands Supported & Effects Log Page: Not Supported 00:33:45.997 Feature Identifiers & Effects Log Page:May Support 00:33:45.997 NVMe-MI Commands & Effects Log Page: May Support 00:33:45.997 Data Area 4 for Telemetry Log: Not Supported 00:33:45.997 Error Log Page Entries Supported: 1 00:33:45.997 Keep Alive: Not Supported 00:33:45.997 00:33:45.997 NVM Command Set Attributes 00:33:45.997 ========================== 00:33:45.997 Submission Queue Entry Size 00:33:45.997 Max: 1 00:33:45.997 Min: 1 00:33:45.997 Completion Queue Entry Size 00:33:45.997 Max: 1 00:33:45.997 Min: 1 00:33:45.997 Number of Namespaces: 0 00:33:45.997 Compare Command: Not Supported 00:33:45.997 Write Uncorrectable Command: Not Supported 00:33:45.997 Dataset Management Command: Not Supported 00:33:45.997 Write Zeroes Command: Not Supported 00:33:45.997 Set Features Save Field: Not Supported 00:33:45.997 Reservations: Not Supported 00:33:45.997 Timestamp: Not Supported 00:33:45.997 Copy: Not Supported 00:33:45.997 Volatile Write Cache: Not Present 00:33:45.997 Atomic Write Unit (Normal): 1 00:33:45.997 Atomic Write Unit (PFail): 1 00:33:45.997 Atomic Compare & Write Unit: 1 00:33:45.997 Fused Compare & Write: Not Supported 00:33:45.997 Scatter-Gather List 00:33:45.997 SGL Command Set: Supported 00:33:45.997 SGL Keyed: Supported 00:33:45.997 SGL Bit Bucket Descriptor: Not Supported 00:33:45.997 SGL Metadata Pointer: Not Supported 00:33:45.997 Oversized SGL: Not Supported 00:33:45.997 SGL Metadata Address: Not Supported 00:33:45.997 SGL Offset: Supported 00:33:45.997 Transport SGL Data Block: Not Supported 00:33:45.997 Replay Protected Memory Block: Not Supported 00:33:45.997 00:33:45.997 Firmware Slot Information 00:33:45.997 ========================= 00:33:45.997 Active slot: 0 00:33:45.997 00:33:45.997 00:33:45.997 Error Log 00:33:45.997 ========= 00:33:45.997 00:33:45.997 Active Namespaces 00:33:45.997 ================= 00:33:45.997 Discovery Log Page 00:33:45.997 ================== 00:33:45.997 Generation Counter: 2 00:33:45.997 Number of Records: 2 00:33:45.997 Record Format: 0 00:33:45.997 00:33:45.997 Discovery Log Entry 0 00:33:45.997 ---------------------- 00:33:45.997 Transport Type: 1 (RDMA) 00:33:45.997 Address Family: 1 (IPv4) 00:33:45.997 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:45.997 Entry Flags: 00:33:45.997 Duplicate Returned Information: 0 00:33:45.997 Explicit Persistent Connection Support for Discovery: 0 00:33:45.997 Transport Requirements: 00:33:45.997 Secure Channel: Not Specified 00:33:45.997 Port ID: 1 (0x0001) 00:33:45.997 Controller ID: 65535 (0xffff) 00:33:45.997 Admin Max SQ Size: 32 00:33:45.997 Transport Service Identifier: 4420 00:33:45.997 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:45.997 Transport Address: 192.168.100.8 00:33:45.997 Transport Specific Address Subtype - RDMA 00:33:45.997 RDMA QP Service Type: 1 (Reliable Connected) 00:33:45.998 RDMA Provider Type: 1 (No provider specified) 00:33:45.998 RDMA CM Service: 1 (RDMA_CM) 00:33:45.998 Discovery Log Entry 1 00:33:45.998 ---------------------- 00:33:45.998 Transport Type: 1 (RDMA) 00:33:45.998 Address Family: 1 (IPv4) 00:33:45.998 Subsystem Type: 2 (NVM Subsystem) 00:33:45.998 Entry Flags: 00:33:45.998 Duplicate Returned Information: 0 00:33:45.998 Explicit Persistent Connection Support for Discovery: 0 00:33:45.998 
Transport Requirements: 00:33:45.998 Secure Channel: Not Specified 00:33:45.998 Port ID: 1 (0x0001) 00:33:45.998 Controller ID: 65535 (0xffff) 00:33:45.998 Admin Max SQ Size: 32 00:33:45.998 Transport Service Identifier: 4420 00:33:45.998 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:45.998 Transport Address: 192.168.100.8 00:33:45.998 Transport Specific Address Subtype - RDMA 00:33:45.998 RDMA QP Service Type: 1 (Reliable Connected) 00:33:45.998 RDMA Provider Type: 1 (No provider specified) 00:33:45.998 RDMA CM Service: 1 (RDMA_CM) 00:33:45.998 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:45.998 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.998 get_feature(0x01) failed 00:33:45.998 get_feature(0x02) failed 00:33:45.998 get_feature(0x04) failed 00:33:45.998 ===================================================== 00:33:45.998 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:33:45.998 ===================================================== 00:33:45.998 Controller Capabilities/Features 00:33:45.998 ================================ 00:33:45.998 Vendor ID: 0000 00:33:45.998 Subsystem Vendor ID: 0000 00:33:45.998 Serial Number: f935b19c5a765a98de0c 00:33:45.998 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:45.998 Firmware Version: 6.7.0-68 00:33:45.998 Recommended Arb Burst: 6 00:33:45.998 IEEE OUI Identifier: 00 00 00 00:33:45.998 Multi-path I/O 00:33:45.998 May have multiple subsystem ports: Yes 00:33:45.998 May have multiple controllers: Yes 00:33:45.998 Associated with SR-IOV VF: No 00:33:45.998 Max Data Transfer Size: 1048576 00:33:45.998 Max Number of Namespaces: 1024 00:33:45.998 Max Number of I/O Queues: 128 00:33:45.998 NVMe Specification Version (VS): 1.3 00:33:45.998 NVMe Specification Version (Identify): 1.3 00:33:45.998 Maximum Queue Entries: 128 00:33:45.998 Contiguous Queues Required: No 00:33:45.998 Arbitration Mechanisms Supported 00:33:45.998 Weighted Round Robin: Not Supported 00:33:45.998 Vendor Specific: Not Supported 00:33:45.998 Reset Timeout: 7500 ms 00:33:45.998 Doorbell Stride: 4 bytes 00:33:45.998 NVM Subsystem Reset: Not Supported 00:33:45.998 Command Sets Supported 00:33:45.998 NVM Command Set: Supported 00:33:45.998 Boot Partition: Not Supported 00:33:45.998 Memory Page Size Minimum: 4096 bytes 00:33:45.998 Memory Page Size Maximum: 4096 bytes 00:33:45.998 Persistent Memory Region: Not Supported 00:33:45.998 Optional Asynchronous Events Supported 00:33:45.998 Namespace Attribute Notices: Supported 00:33:45.998 Firmware Activation Notices: Not Supported 00:33:45.998 ANA Change Notices: Supported 00:33:45.998 PLE Aggregate Log Change Notices: Not Supported 00:33:45.998 LBA Status Info Alert Notices: Not Supported 00:33:45.998 EGE Aggregate Log Change Notices: Not Supported 00:33:45.998 Normal NVM Subsystem Shutdown event: Not Supported 00:33:45.998 Zone Descriptor Change Notices: Not Supported 00:33:45.998 Discovery Log Change Notices: Not Supported 00:33:45.998 Controller Attributes 00:33:45.998 128-bit Host Identifier: Supported 00:33:45.998 Non-Operational Permissive Mode: Not Supported 00:33:45.998 NVM Sets: Not Supported 00:33:45.998 Read Recovery Levels: Not Supported 00:33:45.998 Endurance Groups: Not Supported 00:33:45.998 Predictable Latency Mode: Not Supported 
00:33:45.998 Traffic Based Keep ALive: Supported 00:33:45.998 Namespace Granularity: Not Supported 00:33:45.998 SQ Associations: Not Supported 00:33:45.998 UUID List: Not Supported 00:33:45.998 Multi-Domain Subsystem: Not Supported 00:33:45.998 Fixed Capacity Management: Not Supported 00:33:45.998 Variable Capacity Management: Not Supported 00:33:45.998 Delete Endurance Group: Not Supported 00:33:45.998 Delete NVM Set: Not Supported 00:33:45.998 Extended LBA Formats Supported: Not Supported 00:33:45.998 Flexible Data Placement Supported: Not Supported 00:33:45.998 00:33:45.998 Controller Memory Buffer Support 00:33:45.998 ================================ 00:33:45.998 Supported: No 00:33:45.998 00:33:45.998 Persistent Memory Region Support 00:33:45.998 ================================ 00:33:45.998 Supported: No 00:33:45.998 00:33:45.998 Admin Command Set Attributes 00:33:45.998 ============================ 00:33:45.998 Security Send/Receive: Not Supported 00:33:45.998 Format NVM: Not Supported 00:33:45.998 Firmware Activate/Download: Not Supported 00:33:45.998 Namespace Management: Not Supported 00:33:45.998 Device Self-Test: Not Supported 00:33:45.998 Directives: Not Supported 00:33:45.998 NVMe-MI: Not Supported 00:33:45.998 Virtualization Management: Not Supported 00:33:45.998 Doorbell Buffer Config: Not Supported 00:33:45.998 Get LBA Status Capability: Not Supported 00:33:45.998 Command & Feature Lockdown Capability: Not Supported 00:33:45.998 Abort Command Limit: 4 00:33:45.998 Async Event Request Limit: 4 00:33:45.998 Number of Firmware Slots: N/A 00:33:45.998 Firmware Slot 1 Read-Only: N/A 00:33:45.998 Firmware Activation Without Reset: N/A 00:33:45.998 Multiple Update Detection Support: N/A 00:33:45.998 Firmware Update Granularity: No Information Provided 00:33:45.998 Per-Namespace SMART Log: Yes 00:33:45.998 Asymmetric Namespace Access Log Page: Supported 00:33:45.998 ANA Transition Time : 10 sec 00:33:45.998 00:33:45.998 Asymmetric Namespace Access Capabilities 00:33:45.998 ANA Optimized State : Supported 00:33:45.998 ANA Non-Optimized State : Supported 00:33:45.998 ANA Inaccessible State : Supported 00:33:45.998 ANA Persistent Loss State : Supported 00:33:45.998 ANA Change State : Supported 00:33:45.998 ANAGRPID is not changed : No 00:33:45.998 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:45.998 00:33:45.998 ANA Group Identifier Maximum : 128 00:33:45.998 Number of ANA Group Identifiers : 128 00:33:45.998 Max Number of Allowed Namespaces : 1024 00:33:45.998 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:45.998 Command Effects Log Page: Supported 00:33:45.998 Get Log Page Extended Data: Supported 00:33:45.998 Telemetry Log Pages: Not Supported 00:33:45.998 Persistent Event Log Pages: Not Supported 00:33:45.998 Supported Log Pages Log Page: May Support 00:33:45.998 Commands Supported & Effects Log Page: Not Supported 00:33:45.998 Feature Identifiers & Effects Log Page:May Support 00:33:45.998 NVMe-MI Commands & Effects Log Page: May Support 00:33:45.998 Data Area 4 for Telemetry Log: Not Supported 00:33:45.998 Error Log Page Entries Supported: 128 00:33:45.998 Keep Alive: Supported 00:33:45.998 Keep Alive Granularity: 1000 ms 00:33:45.998 00:33:45.998 NVM Command Set Attributes 00:33:45.998 ========================== 00:33:45.998 Submission Queue Entry Size 00:33:45.998 Max: 64 00:33:45.998 Min: 64 00:33:45.998 Completion Queue Entry Size 00:33:45.998 Max: 16 00:33:45.998 Min: 16 00:33:45.998 Number of Namespaces: 1024 00:33:45.998 Compare Command: Not Supported 
00:33:45.998 Write Uncorrectable Command: Not Supported 00:33:45.998 Dataset Management Command: Supported 00:33:45.998 Write Zeroes Command: Supported 00:33:45.998 Set Features Save Field: Not Supported 00:33:45.998 Reservations: Not Supported 00:33:45.998 Timestamp: Not Supported 00:33:45.998 Copy: Not Supported 00:33:45.998 Volatile Write Cache: Present 00:33:45.998 Atomic Write Unit (Normal): 1 00:33:45.999 Atomic Write Unit (PFail): 1 00:33:45.999 Atomic Compare & Write Unit: 1 00:33:45.999 Fused Compare & Write: Not Supported 00:33:45.999 Scatter-Gather List 00:33:45.999 SGL Command Set: Supported 00:33:45.999 SGL Keyed: Supported 00:33:45.999 SGL Bit Bucket Descriptor: Not Supported 00:33:45.999 SGL Metadata Pointer: Not Supported 00:33:45.999 Oversized SGL: Not Supported 00:33:45.999 SGL Metadata Address: Not Supported 00:33:45.999 SGL Offset: Supported 00:33:45.999 Transport SGL Data Block: Not Supported 00:33:45.999 Replay Protected Memory Block: Not Supported 00:33:45.999 00:33:45.999 Firmware Slot Information 00:33:45.999 ========================= 00:33:45.999 Active slot: 0 00:33:45.999 00:33:45.999 Asymmetric Namespace Access 00:33:45.999 =========================== 00:33:45.999 Change Count : 0 00:33:45.999 Number of ANA Group Descriptors : 1 00:33:45.999 ANA Group Descriptor : 0 00:33:45.999 ANA Group ID : 1 00:33:45.999 Number of NSID Values : 1 00:33:45.999 Change Count : 0 00:33:45.999 ANA State : 1 00:33:45.999 Namespace Identifier : 1 00:33:45.999 00:33:45.999 Commands Supported and Effects 00:33:45.999 ============================== 00:33:45.999 Admin Commands 00:33:45.999 -------------- 00:33:45.999 Get Log Page (02h): Supported 00:33:45.999 Identify (06h): Supported 00:33:45.999 Abort (08h): Supported 00:33:45.999 Set Features (09h): Supported 00:33:45.999 Get Features (0Ah): Supported 00:33:45.999 Asynchronous Event Request (0Ch): Supported 00:33:45.999 Keep Alive (18h): Supported 00:33:45.999 I/O Commands 00:33:45.999 ------------ 00:33:45.999 Flush (00h): Supported 00:33:45.999 Write (01h): Supported LBA-Change 00:33:45.999 Read (02h): Supported 00:33:45.999 Write Zeroes (08h): Supported LBA-Change 00:33:45.999 Dataset Management (09h): Supported 00:33:45.999 00:33:45.999 Error Log 00:33:45.999 ========= 00:33:45.999 Entry: 0 00:33:45.999 Error Count: 0x3 00:33:45.999 Submission Queue Id: 0x0 00:33:45.999 Command Id: 0x5 00:33:45.999 Phase Bit: 0 00:33:45.999 Status Code: 0x2 00:33:45.999 Status Code Type: 0x0 00:33:45.999 Do Not Retry: 1 00:33:45.999 Error Location: 0x28 00:33:45.999 LBA: 0x0 00:33:45.999 Namespace: 0x0 00:33:45.999 Vendor Log Page: 0x0 00:33:45.999 ----------- 00:33:45.999 Entry: 1 00:33:45.999 Error Count: 0x2 00:33:45.999 Submission Queue Id: 0x0 00:33:45.999 Command Id: 0x5 00:33:45.999 Phase Bit: 0 00:33:45.999 Status Code: 0x2 00:33:45.999 Status Code Type: 0x0 00:33:45.999 Do Not Retry: 1 00:33:45.999 Error Location: 0x28 00:33:45.999 LBA: 0x0 00:33:45.999 Namespace: 0x0 00:33:45.999 Vendor Log Page: 0x0 00:33:45.999 ----------- 00:33:45.999 Entry: 2 00:33:45.999 Error Count: 0x1 00:33:45.999 Submission Queue Id: 0x0 00:33:45.999 Command Id: 0x0 00:33:45.999 Phase Bit: 0 00:33:45.999 Status Code: 0x2 00:33:45.999 Status Code Type: 0x0 00:33:45.999 Do Not Retry: 1 00:33:45.999 Error Location: 0x28 00:33:45.999 LBA: 0x0 00:33:45.999 Namespace: 0x0 00:33:45.999 Vendor Log Page: 0x0 00:33:45.999 00:33:45.999 Number of Queues 00:33:45.999 ================ 00:33:45.999 Number of I/O Submission Queues: 128 00:33:45.999 Number of I/O Completion 
Queues: 128 00:33:45.999 00:33:45.999 ZNS Specific Controller Data 00:33:45.999 ============================ 00:33:45.999 Zone Append Size Limit: 0 00:33:45.999 00:33:45.999 00:33:45.999 Active Namespaces 00:33:45.999 ================= 00:33:45.999 get_feature(0x05) failed 00:33:45.999 Namespace ID:1 00:33:45.999 Command Set Identifier: NVM (00h) 00:33:45.999 Deallocate: Supported 00:33:45.999 Deallocated/Unwritten Error: Not Supported 00:33:45.999 Deallocated Read Value: Unknown 00:33:45.999 Deallocate in Write Zeroes: Not Supported 00:33:45.999 Deallocated Guard Field: 0xFFFF 00:33:45.999 Flush: Supported 00:33:45.999 Reservation: Not Supported 00:33:45.999 Namespace Sharing Capabilities: Multiple Controllers 00:33:45.999 Size (in LBAs): 4194304 (2GiB) 00:33:45.999 Capacity (in LBAs): 4194304 (2GiB) 00:33:45.999 Utilization (in LBAs): 4194304 (2GiB) 00:33:45.999 UUID: 97c68b30-53a4-4ceb-bc12-4026b0fa7a76 00:33:45.999 Thin Provisioning: Not Supported 00:33:45.999 Per-NS Atomic Units: Yes 00:33:45.999 Atomic Boundary Size (Normal): 0 00:33:45.999 Atomic Boundary Size (PFail): 0 00:33:45.999 Atomic Boundary Offset: 0 00:33:45.999 NGUID/EUI64 Never Reused: No 00:33:45.999 ANA group ID: 1 00:33:45.999 Namespace Write Protected: No 00:33:45.999 Number of LBA Formats: 1 00:33:45.999 Current LBA Format: LBA Format #00 00:33:45.999 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:45.999 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:45.999 rmmod nvme_rdma 00:33:45.999 rmmod nvme_fabrics 00:33:45.999 11:01:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:45.999 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:46.258 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:46.258 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:46.258 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:46.258 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:46.258 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:33:46.258 11:01:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:33:48.787 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:33:49.045 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:49.045 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:49.303 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:49.303 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:49.303 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:49.303 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:49.869 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:50.127 00:33:50.127 real 0m14.460s 00:33:50.127 user 0m4.225s 00:33:50.127 sys 0m8.625s 00:33:50.127 11:01:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:50.127 11:01:19 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.127 ************************************ 00:33:50.127 END TEST nvmf_identify_kernel_target 00:33:50.127 ************************************ 00:33:50.127 11:01:19 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:33:50.128 11:01:19 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:50.128 11:01:19 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:50.128 11:01:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:50.128 ************************************ 00:33:50.128 START TEST nvmf_auth_host 00:33:50.128 ************************************ 00:33:50.128 11:01:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:33:50.128 * Looking for test storage... 
00:33:50.128 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:33:50.128 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.128 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:50.128 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.128 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.128 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.128 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:50.387 11:01:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:56.950 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:56.950 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@377 -- # modinfo irdma 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@377 -- # modprobe irdma 
roce_ena=1 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:56.950 Found net devices under 0000:af:00.0: cvl_0_0 00:33:56.950 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:56.951 Found net devices under 0000:af:00.1: cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:33:56.951 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:56.951 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:33:56.951 altname enp175s0f0np0 00:33:56.951 altname ens801f0np0 00:33:56.951 inet 192.168.100.8/24 scope global cvl_0_0 00:33:56.951 valid_lft forever preferred_lft forever 00:33:56.951 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:33:56.951 valid_lft forever preferred_lft forever 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:56.951 11:01:25 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:33:56.951 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:33:56.951 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:33:56.951 altname enp175s0f1np1 00:33:56.951 altname ens801f1np1 00:33:56.951 inet 192.168.100.9/24 scope global cvl_0_1 00:33:56.951 valid_lft forever preferred_lft forever 00:33:56.951 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:33:56.951 valid_lft forever preferred_lft forever 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:56.951 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:56.952 192.168.100.9' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:56.952 192.168.100.9' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:56.952 192.168.100.9' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=93152 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 93152 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 93152 ']' 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
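A note on the address gathering traced above: allocate_nic_ips walks get_rdma_if_list and pulls each RDMA interface's IPv4 with the ip/awk/cut pipeline shown. A minimal standalone sketch of that extraction (the interface names cvl_0_0/cvl_0_1 and the 192.168.100.0/24 addresses are specific to this run):

# Print an interface's first IPv4 address, stripping the /prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address cvl_0_0   # 192.168.100.8 on this host
get_ip_address cvl_0_1   # 192.168.100.9 on this host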
00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:56.952 11:01:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b5dda7df82ec343ff37d5caac582d151 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LzJ 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b5dda7df82ec343ff37d5caac582d151 0 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b5dda7df82ec343ff37d5caac582d151 0 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b5dda7df82ec343ff37d5caac582d151 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LzJ 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LzJ 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LzJ 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 
-- # digest=sha512 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e363d5fe64b4c0832c4798a0aef033920d372435f0e98506e76ff1a20434a0ba 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CNe 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e363d5fe64b4c0832c4798a0aef033920d372435f0e98506e76ff1a20434a0ba 3 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e363d5fe64b4c0832c4798a0aef033920d372435f0e98506e76ff1a20434a0ba 3 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e363d5fe64b4c0832c4798a0aef033920d372435f0e98506e76ff1a20434a0ba 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:57.251 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CNe 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CNe 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.CNe 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b39f00a3de2c2ba9e6274179c9f0c33fa48f81c986dfb52f 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Lcs 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b39f00a3de2c2ba9e6274179c9f0c33fa48f81c986dfb52f 0 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b39f00a3de2c2ba9e6274179c9f0c33fa48f81c986dfb52f 0 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b39f00a3de2c2ba9e6274179c9f0c33fa48f81c986dfb52f 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.Lcs 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Lcs 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Lcs 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8df9ef8fd527c11b0671b4eb2c9504d52521a73d57eca99a 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.y1K 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8df9ef8fd527c11b0671b4eb2c9504d52521a73d57eca99a 2 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8df9ef8fd527c11b0671b4eb2c9504d52521a73d57eca99a 2 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8df9ef8fd527c11b0671b4eb2c9504d52521a73d57eca99a 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.y1K 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.y1K 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.y1K 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.522 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c3fcae168e81638b70ec95e5a46cd8a1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.cHW 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c3fcae168e81638b70ec95e5a46cd8a1 1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 c3fcae168e81638b70ec95e5a46cd8a1 1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c3fcae168e81638b70ec95e5a46cd8a1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.cHW 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.cHW 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.cHW 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=45c94e74f69bb1a221bca2ec71d16a80 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dwL 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45c94e74f69bb1a221bca2ec71d16a80 1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45c94e74f69bb1a221bca2ec71d16a80 1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45c94e74f69bb1a221bca2ec71d16a80 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dwL 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dwL 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.dwL 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:57.523 11:01:26 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=beb32c1dc16013da0423f4112f2614d34ab6c372ba97f5f5 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ffY 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key beb32c1dc16013da0423f4112f2614d34ab6c372ba97f5f5 2 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 beb32c1dc16013da0423f4112f2614d34ab6c372ba97f5f5 2 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=beb32c1dc16013da0423f4112f2614d34ab6c372ba97f5f5 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:57.523 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.781 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ffY 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ffY 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ffY 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7638f492bcf89515af3ad4957d75bd22 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cL5 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7638f492bcf89515af3ad4957d75bd22 0 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7638f492bcf89515af3ad4957d75bd22 0 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7638f492bcf89515af3ad4957d75bd22 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cL5 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cL5 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.cL5 
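The gen_dhchap_key calls traced above draw random bytes from /dev/urandom with xxd -p -c0 and then pipe the hex string through an inline python formatter. A sketch of what that formatter produces, assuming the usual NVMe DH-HMAC-CHAP secret encoding (base64 of the key bytes followed by their little-endian CRC-32, with a two-digit hex digest index 0/1/2/3 for none/sha256/sha384/sha512) and assuming python3 is on PATH where the trace invokes a bare python:

# Wrap a raw key string as an NVMe DH-HMAC-CHAP secret: DHHC-1:<digest>:<base64(key + crc32)>:
format_dhchap_key() {
    local key=$1 digest=$2   # digest index: 0=none, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
}

# 32 hex chars (16 random bytes) with transform "none", as for keys[0] above:
format_dhchap_key "$(xxd -p -c0 -l 16 /dev/urandom)" 0

Decoding the base64 payload of a secret from this run (e.g. DHHC-1:00:YjVkZGE3...) yields the ASCII hex key b5dda7... plus four trailing CRC bytes, which is consistent with this encoding.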
00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90c8578a5b4c89fc046dae8474aa57b5ef5fba7649b6f36a040540063bf93818 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1D2 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90c8578a5b4c89fc046dae8474aa57b5ef5fba7649b6f36a040540063bf93818 3 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90c8578a5b4c89fc046dae8474aa57b5ef5fba7649b6f36a040540063bf93818 3 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90c8578a5b4c89fc046dae8474aa57b5ef5fba7649b6f36a040540063bf93818 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1D2 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1D2 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1D2 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93152 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 93152 ']' 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
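In the loop that follows, each generated secret file is handed to the running nvmf_tgt over JSON-RPC; rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Roughly equivalent direct invocations, using the key files from this run:

# Register host secrets (keyN) and controller secrets (ckeyN) with the SPDK keyring.
./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.LzJ
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CNe
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.Lcs
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.y1K
# ...and likewise for key2/ckey2, key3/ckey3, and key4 as traced below.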
00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:57.782 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LzJ 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.CNe ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CNe 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Lcs 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.y1K ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.y1K 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.cHW 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.dwL ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dwL 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ffY 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.cL5 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.cL5 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1D2 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:58.040 11:01:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.571 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:00.830 Waiting for block devices as requested 00:34:00.830 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:01.089 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:01.089 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:01.089 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:01.348 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:01.348 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:01.348 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:01.348 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:01.607 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:01.607 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:01.607 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:01.866 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:01.866 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:01.866 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:01.866 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:02.124 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:02.124 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:02.691 No valid GPT data, bailing 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:34:02.691 11:01:31 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:34:02.691 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:34:02.692 No valid GPT data, bailing 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n2 ]] 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n2 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # continue 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:02.692 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:34:02.951 00:34:02.951 Discovery Log Number of Records 2, Generation counter 2 00:34:02.951 =====Discovery Log Entry 0====== 00:34:02.951 trtype: rdma 00:34:02.951 adrfam: ipv4 00:34:02.951 
subtype: current discovery subsystem 00:34:02.951 treq: not specified, sq flow control disable supported 00:34:02.951 portid: 1 00:34:02.951 trsvcid: 4420 00:34:02.951 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:02.951 traddr: 192.168.100.8 00:34:02.951 eflags: none 00:34:02.951 rdma_prtype: not specified 00:34:02.951 rdma_qptype: connected 00:34:02.951 rdma_cms: rdma-cm 00:34:02.951 rdma_pkey: 0x0000 00:34:02.951 =====Discovery Log Entry 1====== 00:34:02.951 trtype: rdma 00:34:02.951 adrfam: ipv4 00:34:02.951 subtype: nvme subsystem 00:34:02.951 treq: not specified, sq flow control disable supported 00:34:02.951 portid: 1 00:34:02.951 trsvcid: 4420 00:34:02.951 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:02.951 traddr: 192.168.100.8 00:34:02.951 eflags: none 00:34:02.951 rdma_prtype: not specified 00:34:02.951 rdma_qptype: connected 00:34:02.951 rdma_cms: rdma-cm 00:34:02.951 rdma_pkey: 0x0000 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha256,sha384,sha512 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.951 11:01:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.217 nvme0n1 00:34:03.217 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.217 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.217 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.217 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.217 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for 
digest in "${digests[@]}" 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.218 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:03.219 11:01:32 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.219 nvme0n1 00:34:03.219 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.479 nvme0n1 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.479 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.736 nvme0n1 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.736 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
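The cycle traced above is the same for every key: connect_authenticate() pins the host to one digest/dhgroup via bdev_nvme_set_options, resolves the target address, attaches with the DH-HCHAP keys for that keyid, checks that the controller actually appeared, and detaches. A minimal stand-alone sketch of that host-side round, assuming a running target at 192.168.100.8:4420 and keyring entries key1/ckey1 registered earlier in the test (the ./scripts/rpc.py path is illustrative; the RPC names and flags are the ones visible in the trace):

    digest=sha256 dhgroup=ffdhe2048 keyid=1

    # Only the digest/dhgroup under test may be negotiated.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key and, when one exists, the controller key.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Authentication passed iff the controller shows up; then clean up.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
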
00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.994 nvme0n1 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.994 11:01:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.994 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.994 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.994 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.994 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
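get_main_ns_ip, inlined at nvmf/common.sh@741-@755 in every iteration above, only maps the transport to the name of the environment variable holding the address and prints that variable's value; with rdma that is NVMF_FIRST_TARGET_IP, hence the constant 192.168.100.8. A condensed sketch of the logic shown in the trace (using TEST_TRANSPORT for the transport variable is an assumption; the trace only shows its expanded value, rdma):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side port for RDMA runs
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator address for TCP runs
        ip=${ip_candidates[$TEST_TRANSPORT]}         # variable *name*, e.g. NVMF_FIRST_TARGET_IP
        [[ -n ${!ip} ]] && echo "${!ip}"             # indirect expansion -> 192.168.100.8 here
    }
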
00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.252 nvme0n1 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.252 11:01:33 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:04.252 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.509 nvme0n1 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.509 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 
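The secrets echoed at host/auth.sh@45-@51 follow the NVMe DH-HCHAP secret representation DHHC-1:XX:<base64>:, where the middle field names the hash transformation applied to the secret (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret plus a CRC-32 tail. The controller key is optional: for keyid 4 the ckey entry is empty (as in the ffdhe2048 round above, where host/auth.sh@51 evaluates [[ -z '' ]]), and the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion seen at host/auth.sh@58 then drops the flag entirely, so that iteration exercises unidirectional authentication only. A short illustration of that expansion:

    ckeys[4]=''                                      # keyid 4 has no controller key
    args=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})   # :+ treats empty as unset
    echo "${#args[@]}"                               # -> 0: the flag is simply omitted
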
00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.510 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.769 nvme0n1 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.769 11:01:33 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.769 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.029 nvme0n1 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.029 11:01:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.029 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.288 nvme0n1 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:05.288 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:05.289 11:01:34 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:05.289 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:05.289 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.289 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.289 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.548 nvme0n1 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.548 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.549 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.808 nvme0n1 00:34:05.808 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.808 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.808 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.808 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.808 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.808 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:06.068 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:06.069 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:06.069 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:06.069 11:01:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # 
echo 192.168.100.8 00:34:06.069 11:01:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.069 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.069 11:01:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.328 nvme0n1 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.328 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.587 nvme0n1 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.587 11:01:35 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:06.587 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.587 11:01:35 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.846 nvme0n1 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.846 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.847 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.105 11:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.106 nvme0n1 00:34:07.106 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.106 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.106 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.106 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.106 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.106 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.364 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:07.365 11:01:36 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.365 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.624 nvme0n1 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.624 11:01:36 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.624 11:01:36 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:07.624 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:07.625 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:07.625 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:07.625 11:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:07.625 11:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.883 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.883 11:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.142 nvme0n1 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.142 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:08.143 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:08.143 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:08.143 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:08.143 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:08.143 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.143 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.143 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.711 nvme0n1 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
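# --- Sketch (not part of the captured trace): what this nvmf_auth_host pass exercises. ---
# The surrounding trace is host/auth.sh walking DH-HMAC-CHAP qualifiers: for the sha256
# digest it iterates each FFDHE group and, for every generated key index, installs the
# key on the target, restricts the host's allowed digest/dhgroup, attaches over RDMA
# with the matching key pair, confirms the controller appears, and detaches. The loop
# below is a hedged reconstruction from the trace itself; the dhgroups list shown and
# the helper internals are assumptions for illustration, not the verbatim script.
for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do   # this log section covers these groups
  for keyid in "${!keys[@]}"; do                   # keys[0..4]; ckeys[4] is empty in the trace
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"  # target side: key (+ ctrlr key when present)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
    # authentication succeeded iff the controller came up, then tear it down for the next pass
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
# --- End sketch; verbatim trace resumes below. ---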
00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:08.711 11:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:08.712 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.712 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.712 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.970 nvme0n1 00:34:08.970 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.970 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:08.971 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.229 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.229 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:09.229 11:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.229 11:01:38 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.229 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.488 nvme0n1 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:09.488 11:01:38 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.488 11:01:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.056 nvme0n1 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.056 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe8192 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.315 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 nvme0n1 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.883 
11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.883 11:01:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.451 nvme0n1 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.451 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.026 nvme0n1 00:34:12.026 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.026 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.026 11:01:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.026 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.026 11:01:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.026 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.026 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.026 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.026 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.026 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.325 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.892 nvme0n1 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.892 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.893 11:01:41 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.893 nvme0n1 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.893 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha384)' 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.152 11:01:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.152 nvme0n1 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.152 11:01:42 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.152 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.411 nvme0n1 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.411 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.670 nvme0n1 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.670 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.671 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.929 nvme0n1 00:34:13.929 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.929 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.929 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.929 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.929 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.929 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.929 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
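Each `nvmet_auth_set_key <digest> <dhgroup> <keyid>` call traced above echoes three values for the host entry on the target: the crypto-API digest name (`hmac(sha384)`), the FFDHE group, and the `DHHC-1` secret, plus a controller key when one is defined for bidirectional authentication. (In the `DHHC-1:NN:<base64>:` wrapper, the two-digit field records how the secret was transformed: `00` for none, `01`/`02`/`03` for SHA-256/384/512.) Where those echoes land is not visible in the log; the sketch below is a plausible reconstruction against the kernel nvmet configfs host attributes, and that path, the array names, and the wiring of the echoes are assumptions — only the echoed values come from this trace.

```bash
# Hypothetical sketch of the target-side helper traced above (auth.sh's
# nvmet_auth_set_key). Assumes the kernel nvmet configfs layout with
# per-host dhchap_* attributes; keys[]/ckeys[] values are copied from
# the trace for the sha384/ffdhe3072/keyid=0 iteration.
declare -a keys ckeys
keys[0]="DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8:"
ckeys[0]="DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=:"

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

	echo "hmac(${digest})" > "$host/dhchap_hash"    # e.g. hmac(sha384)
	echo "$dhgroup" > "$host/dhchap_dhgroup"        # e.g. ffdhe3072
	echo "$key" > "$host/dhchap_key"                # host-authentication secret
	# Bidirectional auth: a controller key is installed only when one is defined.
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

nvmet_auth_set_key sha384 ffdhe3072 0
```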
00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.930 11:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.189 nvme0n1 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.189 
11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 
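`get_main_ns_ip` (nvmf/common.sh) is what turns the transport type into the address handed to `bdev_nvme_attach_controller`; the trace above shows it selecting `NVMF_FIRST_TARGET_IP` for rdma and printing 192.168.100.8. A condensed reconstruction from the traced statements follows, assuming `TEST_TRANSPORT` holds the transport under test; the exact error handling in nvmf/common.sh may differ.

```bash
# Condensed from the nvmf/common.sh@741-755 trace: pick the environment
# variable that names the target address for this transport, then
# dereference it via bash indirect expansion.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP
		[tcp]=NVMF_INITIATOR_IP
	)

	# Both the transport and its candidate variable must be known.
	[[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_FIRST_TARGET_IP
	[[ -n ${!ip} ]] || return 1            # here: 192.168.100.8
	echo "${!ip}"
}
```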
00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.189 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.448 nvme0n1 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:14.448 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:14.449 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:14.449 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.449 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.449 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.708 nvme0n1 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.708 11:01:43 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.708 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.708 11:01:43 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.967 nvme0n1 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.967 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.968 11:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.227 nvme0n1 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:15.227 11:01:44 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.227 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.486 nvme0n1 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.486 11:01:44 
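The passes traced above and below are cells of the digest × dhgroup × keyid matrix (at this point sha384/ffdhe4096 with keyid 0, whose verify and detach continue just below). Condensed out of the xtrace noise, each cell drives the same four host-side RPCs against the target at 192.168.100.8:4420. In this minimal sketch, $rpc standing for SPDK's scripts/rpc.py is an assumption; the trace only ever shows the wrapped rpc_cmd calls.

    # One connect_authenticate pass, assuming $rpc -> scripts/rpc.py and that
    # key$keyid/ckey$keyid were registered with the host beforehand.
    digest=sha384 dhgroup=ffdhe4096 keyid=0

    # 1. Pin the host to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Attach over RDMA, authenticating with this keyid's secrets.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Confirm the controller came up under the expected name.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # 4. Detach so the next keyid starts from a clean slate.
    $rpc bdev_nvme_detach_controller nvme0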
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.486 11:01:44 
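Each cell begins on the target side: nvmet_auth_set_key (host/auth.sh@42-51) re-arms the kernel nvmet target with what it should expect from this host, and the bare echoes of 'hmac(sha384)', the dhgroup name, and the DHHC-1 strings seen in the trace are those writes with their redirections hidden by the xtrace. The configfs paths below are the standard Linux nvmet host attributes, not something visible in this log, so treat them as an assumption.

    # Presumed shape of nvmet_auth_set_key under the standard nvmet configfs
    # layout (the trace shows the echoes but not their redirection targets).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # host/auth.sh@48
    echo "$dhgroup"     > "$host_dir/dhchap_dhgroup"   # host/auth.sh@49
    echo "$key"         > "$host_dir/dhchap_key"       # host/auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # host/auth.sh@51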
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.486 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:15.487 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:15.487 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:15.487 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:15.487 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:15.487 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.487 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.487 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.746 nvme0n1 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.746 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.005 11:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.005 nvme0n1 00:34:16.005 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.005 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.005 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.005 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.005 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.264 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.264 11:01:45 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.264 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.264 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.264 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.264 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.264 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
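The get_main_ns_ip helper being traced at this point (nvmf/common.sh@741-755) resolves which environment variable holds the target address for the transport in use: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and the selected name is then dereferenced, yielding 192.168.100.8 here. The reconstruction below mirrors the array and checks visible in the trace; the TEST_TRANSPORT variable name and the ${!ip} indirection are inferred, not quoted.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@744
            [tcp]=NVMF_INITIATOR_IP       # nvmf/common.sh@745
        )
        # @747: bail out on an unknown or unmapped transport.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748
        [[ -z ${!ip} ]] && return 1            # @750: ${!ip} is 192.168.100.8 here
        echo "${!ip}"                          # @755
    }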
00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.265 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 nvme0n1 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:16.528 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.529 11:01:45 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.529 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.795 nvme0n1 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:16.795 11:01:45 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.795 11:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.363 nvme0n1 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.363 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.364 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.623 nvme0n1 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:17.623 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:17.881 11:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.881 
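Keyid 4 stands apart from keyids 0-3 everywhere in this section: its ckey is empty, so the `[[ -z '' ]]` branch skips the controller-key echo and the attach carries --dhchap-key key4 alone, meaning only the host proves its identity; keyids 0-3 also pass --dhchap-ctrlr-key ckeyN, so the controller must authenticate back (bidirectional DH-HMAC-CHAP). The trace's host/auth.sh@58 builds that optional argument with a compact array idiom:

    # Verbatim from the trace (host/auth.sh@58): expand to the option pair
    # only when a controller key exists for this keyid; to nothing otherwise.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # The attach can then splice it in unconditionally, e.g.:
    #   rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"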
11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.882 11:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.140 nvme0n1 00:34:18.140 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.140 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.141 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.709 nvme0n1 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.709 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.968 nvme0n1 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:18.968 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.227 11:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:19.227 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:19.227 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:19.227 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:19.227 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:19.227 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.227 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.227 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.794 nvme0n1 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 
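The secrets cycled through this section follow the NVMe DH-HMAC-CHAP representation DHHC-1:NN:<base64>:, where NN records how the secret was transformed (00 = untransformed; 01/02/03 = transformed with SHA-256/SHA-384/SHA-512, per TP 8006) and the base64 payload is the secret followed by a CRC-32 check value. The five keys in this run deliberately span those variants: key0 and key1 use 00, key2 uses 01, key3 uses 02, and key4 uses 03. Strings of this form can be minted with nvme-cli; the exact flags below are from recent nvme-cli and should be treated as an assumption.

    # Assumed nvme-cli invocation: emit a DHHC-1:01:...: secret (SHA-256
    # transformed, 32-byte key) bound to the host NQN used in this test.
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0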
00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.794 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.795 11:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.361 nvme0n1 00:34:20.361 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.361 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.361 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
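The recurring xtrace_disable / "set +x" / "[[ 0 == 0 ]]" triplet that brackets every RPC in this log is the rpc_cmd plumbing from common/autotest_common.sh: tracing is switched off while the JSON-RPC call runs, then restored, and the captured exit status is asserted to be zero, which xtrace renders as "[[ 0 == 0 ]]" on success. The wrapper below is inferred from that pattern, not quoted from autotest_common.sh, and $rootdir is an assumption.

    rpc_cmd() {
        xtrace_disable                     # @560, which executes "set +x" (@10)
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?
        xtrace_restore                     # tracing resumes here
        [[ $rc == 0 ]]                     # logged as "[[ 0 == 0 ]]" (@588)
    }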
00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.362 
11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.362 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.928 nvme0n1 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.928 11:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.494 nvme0n1 00:34:21.494 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.494 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.494 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.494 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.494 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.494 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.753 11:01:50 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:21.753 
11:01:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.753 11:01:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.320 nvme0n1 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha512 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.320 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.578 nvme0n1 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.578 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.579 nvme0n1 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.579 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.837 11:01:51 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.837 nvme0n1 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.837 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.096 11:01:51 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.096 11:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.096 nvme0n1 00:34:23.096 
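
Editor's sketch: the nvmf/common.sh@741-755 entries that precede every attach replay get_main_ns_ip, which maps the transport in use to the matching address variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints its value, 192.168.100.8 here. A reconstruction under stated assumptions — the transport variable name (TEST_TRANSPORT) and the use of indirect expansion are guesses, since the trace only shows expanded values:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        # Fail if no transport is set or it has no address candidate (the [[ -z ]] checks above).
        [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -n ${!ip} ]] && echo "${!ip}"   # indirect: NVMF_FIRST_TARGET_IP -> 192.168.100.8
    }
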
11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:23.096 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.097 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.354 
11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 nvme0n1 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe3072 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.354 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.612 nvme0n1 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.612 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.613 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.871 nvme0n1 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.871 11:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.129 nvme0n1 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:24.129 11:01:53 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.129 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.388 nvme0n1 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.388 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.647 nvme0n1 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 0 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.647 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.906 nvme0n1 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.906 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.165 11:01:53 
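Annotation: every iteration above runs the same host-side cycle via connect_authenticate (@55-@65): set the allowed digest/DH group, attach an authenticated controller, verify it appears, detach. An equivalent standalone invocation, with every flag and value taken verbatim from the log, assuming rpc_cmd is the suite's usual wrapper around scripts/rpc.py and that key0/ckey0 were registered as key names earlier in the run (that step is not in this excerpt):

rpc=./scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0 (@64)
$rpc bdev_nvme_detach_controller nvme0              # teardown (@65)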
nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.165 11:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.423 nvme0n1 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha512 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:25.423 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.424 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.424 11:01:54 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:34:25.682 nvme0n1 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.682 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.967 nvme0n1 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.967 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 
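Annotation: the secrets logged at @45/@46 use the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64 payload>:, where the payload is the secret followed by its CRC-32 and <t> names the transformation applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512 -- that mapping is from the spec, not from this log). A quick length check on the subtype-03 key traced just above:

key='DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=:'
cut -d: -f3 <<< "$key" | base64 -d | wc -c
# -> 68: a 64-byte secret followed by its 4-byte CRC-32, consistent with subtype 03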
00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.968 11:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.226 nvme0n1 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.226 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.485 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.744 nvme0n1 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.744 11:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.311 nvme0n1 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.311 11:01:56 
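Annotation: the block repeated at nvmf/common.sh@741-@755 before every attach is get_main_ns_ip picking which address to dial for the active transport. A reconstruction from the xtrace (the literal rdma in "[[ -z rdma ]]" is the expanded transport; the variable name TEST_TRANSPORT is my assumption):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # @747: transport must be set and must have a candidate variable name
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: indirect name, e.g. NVMF_FIRST_TARGET_IP
    [[ -z ${!ip} ]] && return 1            # @750: the named variable must hold an address
    echo "${!ip}"                          # @755: 192.168.100.8 in this run
}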
nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:27.311 11:01:56 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.311 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.569 nvme0n1 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.569 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:27.828 11:01:56 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.828 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.087 nvme0n1 00:34:28.087 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.087 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.087 11:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.087 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.087 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.087 11:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.087 11:01:57 
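Annotation: iterations with keyid=4 (the ffdhe4096 pass above, and the ffdhe6144 pass that follows) have no controller key: @46 logs an empty ckey= and the @61 attach carries only --dhchap-key key4. That is the @58 array expansion at work; a minimal demo with hypothetical sample values standing in for the DHHC-1 secrets:

# An empty/unset ckeys[keyid] makes ckey an empty array, so no
# --dhchap-ctrlr-key flag ever reaches bdev_nvme_attach_controller.
declare -a ckeys=([0]='DHHC-1:03:<sample>:' [4]='')
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
done
# keyid=0 -> --dhchap-ctrlr-key ckey0
# keyid=4 -> <no controller key>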
nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.087 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.655 nvme0n1 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVkZGE3ZGY4MmVjMzQzZmYzN2Q1Y2FhYzU4MmQxNTHYzzc8: 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2M2Q1ZmU2NGI0YzA4MzJjNDc5OGEwYWVmMDMzOTIwZDM3MjQzNWYwZTk4NTA2ZTc2ZmYxYTIwNDM0YTBiYfM5fs4=: 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:28.655 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:28.656 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:28.656 11:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:28.656 11:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.656 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.656 11:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.222 nvme0n1 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.222 11:01:58 
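On the host side, `connect_authenticate` (auth.sh @55 through @61) first restricts DH-HMAC-CHAP negotiation to the digest and DH group under test via `bdev_nvme_set_options`, then attaches the controller with the matching key slot; the array expansion at @58 adds `--dhchap-ctrlr-key` only when a controller key exists for that keyid. The same two steps issued directly through SPDK's `rpc.py`, with the address and NQNs taken from this run:

```bash
# Host-side connect step: pin DH-HMAC-CHAP negotiation to one digest/dhgroup
# pair, then attach using the key names registered with the bdev/nvme layer.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
```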
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.222 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.223 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.790 nvme0n1 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzNmY2FlMTY4ZTgxNjM4YjcwZWM5NWU1YTQ2Y2Q4YTFg5xUi: 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVjOTRlNzRmNjliYjFhMjIxYmNhMmVjNzFkMTZhODBaDnNk: 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.791 11:01:58 
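The repeated @101/@102 markers show the driver loops behind these iterations: every configured DH group is exercised against every key slot for the digest under test, with a set-key on the target followed by an authenticated connect from the host. Reconstructed shape of that loop (a sketch, not the verbatim script):

```bash
# Shape of the matrix driver at auth.sh @101-@104: every DH group against
# every key slot, target keyed first, then an authenticated host connect.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done
```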
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.791 11:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.357 nvme0n1 00:34:30.357 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.357 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.357 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.357 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.357 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.357 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:30.616 11:01:59 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmViMzJjMWRjMTYwMTNkYTA0MjNmNDExMmYyNjE0ZDM0YWI2YzM3MmJhOTdmNWY1sOZ6iA==: 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzYzOGY0OTJiY2Y4OTUxNWFmM2FkNDk1N2Q3NWJkMjKRRF56: 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:30.616 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.183 nvme0n1 00:34:31.183 11:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host 
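All secrets in this run use the NVMe over Fabrics `DHHC-1:xx:<base64>:` representation, where the two-digit field records how the secret was transformed: `00` means unhashed, and `01`, `02`, `03` mean hashed with SHA-256, SHA-384, and SHA-512 respectively (hence the `DHHC-1:03:` keys paired with sha512 here). Keys of this shape can be generated with nvme-cli; a sketch using the host NQN from this run:

```bash
# Generate a DH-HMAC-CHAP secret in DHHC-1 format; --hmac selects the key
# transformation (3 -> SHA-512, matching the DHHC-1:03: keys in this log).
nvme gen-dhchap-key --hmac=3 --nqn=nqn.2024-02.io.spdk:host0
```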
-- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTBjODU3OGE1YjRjODlmYzA0NmRhZTg0NzRhYTU3YjVlZjVmYmE3NjQ5YjZmMzZhMDQwNTQwMDYzYmY5MzgxOKXPL+0=: 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.183 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.749 nvme0n1 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.749 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjM5ZjAwYTNkZTJjMmJhOWU2Mjc0MTc5YzlmMGMzM2ZhNDhmODFjOTg2ZGZiNTJmXFHLDg==: 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: ]] 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGRmOWVmOGZkNTI3YzExYjA2NzFiNGViMmM5NTA0ZDUyNTIxYTczZDU3ZWNhOTlhVf4s4Q==: 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:31.750 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 request: 00:34:32.007 { 00:34:32.007 "name": "nvme0", 00:34:32.007 "trtype": "rdma", 00:34:32.007 "traddr": "192.168.100.8", 00:34:32.007 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:32.007 "adrfam": "ipv4", 00:34:32.007 "trsvcid": "4420", 00:34:32.007 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:32.007 "method": "bdev_nvme_attach_controller", 00:34:32.007 "req_id": 1 00:34:32.007 } 00:34:32.007 Got JSON-RPC error response 00:34:32.007 response: 00:34:32.007 { 00:34:32.007 "code": -5, 00:34:32.007 "message": "Input/output 
error" 00:34:32.007 } 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.007 11:02:00 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 request: 00:34:32.007 { 00:34:32.007 "name": "nvme0", 00:34:32.007 "trtype": "rdma", 00:34:32.007 "traddr": "192.168.100.8", 00:34:32.007 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:32.007 "adrfam": "ipv4", 00:34:32.007 "trsvcid": "4420", 00:34:32.007 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:32.007 "dhchap_key": "key2", 00:34:32.007 "method": "bdev_nvme_attach_controller", 00:34:32.007 "req_id": 1 00:34:32.007 } 00:34:32.007 Got JSON-RPC error response 00:34:32.007 response: 00:34:32.007 { 00:34:32.007 "code": -5, 00:34:32.007 "message": "Input/output error" 00:34:32.007 } 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:32.007 11:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:32.007 
11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:32.007 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:32.008 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:32.008 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:32.008 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:32.008 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.265 request: 00:34:32.265 { 00:34:32.265 "name": "nvme0", 00:34:32.265 "trtype": "rdma", 00:34:32.265 "traddr": "192.168.100.8", 00:34:32.265 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:32.265 "adrfam": "ipv4", 00:34:32.265 "trsvcid": "4420", 00:34:32.265 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:32.265 "dhchap_key": "key1", 00:34:32.265 "dhchap_ctrlr_key": "ckey2", 00:34:32.265 "method": "bdev_nvme_attach_controller", 00:34:32.265 "req_id": 1 00:34:32.265 } 00:34:32.265 Got JSON-RPC error response 00:34:32.265 response: 00:34:32.265 { 00:34:32.265 "code": -5, 00:34:32.265 "message": "Input/output error" 00:34:32.265 } 00:34:32.265 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:32.265 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:34:32.265 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:32.265 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:32.266 rmmod nvme_rdma 00:34:32.266 rmmod nvme_fabrics 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 93152 ']' 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 93152 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 93152 ']' 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 93152 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@954 -- # uname 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 93152 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 93152' 00:34:32.266 killing process with pid 93152 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 93152 00:34:32.266 11:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 93152 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:34:32.524 11:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:34:35.050 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:35.050 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:35.308 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:35.308 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:35.308 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.308 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.308 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 
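Tearing down the kernel target (`clean_kernel_target`, traced at nvmf/common.sh @684 through @695 above) has to unwind the configfs graph in reverse dependency order: symlinks first, then the namespace, port, and subsystem directories, and only then can `nvmet_rdma`/`nvmet` unload. A sketch of that order using the NQNs from this run; the `enable` attribute as the target of the bare `echo 0` at @686 is an assumption:

```bash
# Teardown order from clean_kernel_target: configfs refuses to remove a
# directory while links into it remain, so drop symlinks before rmdir.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

rm "${subsys}/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "${subsys}/namespaces/1/enable"   # assumed target of the bare echo 0
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "${subsys}/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "${subsys}"
modprobe -r nvmet_rdma nvmet
```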
00:34:35.308 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:35.875 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:36.134 11:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LzJ /tmp/spdk.key-null.Lcs /tmp/spdk.key-sha256.cHW /tmp/spdk.key-sha384.ffY /tmp/spdk.key-sha512.1D2 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log 00:34:36.134 11:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:34:38.665 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:38.665 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:38.665 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:38.665 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:38.665 00:34:38.665 real 0m48.619s 00:34:38.665 user 0m44.662s 00:34:38.665 sys 0m12.697s 00:34:38.665 11:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:38.665 11:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.665 ************************************ 00:34:38.665 END TEST nvmf_auth_host 00:34:38.665 ************************************ 00:34:38.923 11:02:07 nvmf_rdma -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:34:38.923 11:02:07 nvmf_rdma -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:34:38.923 11:02:07 nvmf_rdma -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:34:38.923 11:02:07 nvmf_rdma -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:34:38.923 11:02:07 nvmf_rdma -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:34:38.923 11:02:07 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:38.923 11:02:07 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:38.923 11:02:07 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:38.923 ************************************ 00:34:38.923 START TEST nvmf_bdevperf 00:34:38.923 ************************************ 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:34:38.923 * Looking for test storage... 
00:34:38.923 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.923 11:02:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.924 11:02:07 
nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:38.924 11:02:07 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- 
# pci_devs+=("${mlx[@]}") 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:45.487 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:45.487 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:45.487 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@377 -- # modinfo irdma 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:45.488 Found net devices under 0000:af:00.0: cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:45.488 Found net devices under 0000:af:00.1: cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@105 -- # continue 2 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:34:45.488 20: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:34:45.488 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:34:45.488 altname enp175s0f0np0 00:34:45.488 altname ens801f0np0 00:34:45.488 inet 192.168.100.8/24 scope global cvl_0_0 00:34:45.488 valid_lft forever preferred_lft forever 00:34:45.488 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:34:45.488 valid_lft forever preferred_lft forever 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:34:45.488 21: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:34:45.488 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:34:45.488 altname enp175s0f1np1 00:34:45.488 altname ens801f1np1 00:34:45.488 inet 192.168.100.9/24 scope global cvl_0_1 00:34:45.488 valid_lft forever preferred_lft forever 00:34:45.488 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:34:45.488 valid_lft forever preferred_lft forever 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf --
nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:45.488 192.168.100.9' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:45.488 192.168.100.9' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 
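The @112/@113 steps traced above are the whole address-discovery idiom: 'ip -o -4 addr show' prints one line per IPv4 address, awk takes field 4 (the addr/prefix pair), and cut drops the prefix length; the @457/@458 steps around this point then pick the first and second targets with head/tail. A minimal standalone sketch of the same pipeline, reassembled from exactly the commands the xtrace shows (the function layout is an illustration, not the nvmf/common.sh source itself):

#!/usr/bin/env bash
# Resolve the primary IPv4 address of an interface, as traced at @112/@113.
get_ip_address() {
    local interface=$1
    # 'ip -o -4' emits one line like "20: cvl_0_0    inet 192.168.100.8/24 scope global cvl_0_0 ..."
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# First/second target selection, mirroring the @456-@458 head/tail steps.
RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address cvl_0_0)" "$(get_ip_address cvl_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9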
00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:45.488 192.168.100.9' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=106456 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 106456 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 106456 ']' 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:45.488 11:02:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 [2024-06-10 11:02:13.593167] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:45.488 [2024-06-10 11:02:13.593209] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.488 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.488 [2024-06-10 11:02:13.653777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:45.488 [2024-06-10 11:02:13.732140] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.488 [2024-06-10 11:02:13.732175] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
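The core mask in the EAL parameters above (-c 0xE, from nvmfappstart -m 0xE) is binary 1110, i.e. cores 1, 2 and 3; that matches the 'Total cores available: 3' notice and the three reactor-start notices that follow. A quick way to decode such a mask (illustrative shell, not part of the test scripts):

mask=0xE
for core in {0..7}; do
    # Test bit 'core' of the mask; print the cores a reactor will run on.
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# Prints: reactor on core 1, core 2, core 3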
00:34:45.488 [2024-06-10 11:02:13.732182] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.488 [2024-06-10 11:02:13.732188] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.488 [2024-06-10 11:02:13.732193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.488 [2024-06-10 11:02:13.732244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:45.488 [2024-06-10 11:02:13.732366] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:45.488 [2024-06-10 11:02:13.732368] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 [2024-06-10 11:02:14.455256] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1d2c0d0/0x1d2b710) succeed. 00:34:45.488 [2024-06-10 11:02:14.463869] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1d2d400/0x1d2bc90) succeed. 00:34:45.488 [2024-06-10 11:02:14.463890] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 Malloc0 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:45.488 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:45.815 [2024-06-10 11:02:14.518178] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:45.815 11:02:14 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:45.815 11:02:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:45.815 11:02:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:45.815 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:45.815 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:45.815 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:45.815 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:45.815 { 00:34:45.815 "params": { 00:34:45.815 "name": "Nvme$subsystem", 00:34:45.815 "trtype": "$TEST_TRANSPORT", 00:34:45.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:45.816 "adrfam": "ipv4", 00:34:45.816 "trsvcid": "$NVMF_PORT", 00:34:45.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:45.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:45.816 "hdgst": ${hdgst:-false}, 00:34:45.816 "ddgst": ${ddgst:-false} 00:34:45.816 }, 00:34:45.816 "method": "bdev_nvme_attach_controller" 00:34:45.816 } 00:34:45.816 EOF 00:34:45.816 )") 00:34:45.816 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:45.816 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
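The rpc_cmd calls above build the target in five steps: an RDMA transport, a malloc bdev, a subsystem, a namespace, and a listener. The same bring-up can be replayed against a running nvmf_tgt with plain rpc.py; a sketch, assuming the rpc.py from the checked-out SPDK tree and the default /var/tmp/spdk.sock socket (the flag spellings are taken verbatim from the trace):

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192     # as at bdevperf.sh@17
$rpc bdev_malloc_create 64 512 -b Malloc0                                # 64 MB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420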
00:34:45.816 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:45.816 11:02:14 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:45.816 "params": { 00:34:45.816 "name": "Nvme1", 00:34:45.816 "trtype": "rdma", 00:34:45.816 "traddr": "192.168.100.8", 00:34:45.816 "adrfam": "ipv4", 00:34:45.816 "trsvcid": "4420", 00:34:45.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:45.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:45.816 "hdgst": false, 00:34:45.816 "ddgst": false 00:34:45.816 }, 00:34:45.816 "method": "bdev_nvme_attach_controller" 00:34:45.816 }' 00:34:45.816 [2024-06-10 11:02:14.565828] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:45.816 [2024-06-10 11:02:14.565871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106698 ] 00:34:45.816 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.816 [2024-06-10 11:02:14.624760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.816 [2024-06-10 11:02:14.696922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.075 Running I/O for 1 seconds... 00:34:47.012 00:34:47.012 Latency(us) 00:34:47.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.012 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:47.012 Verification LBA range: start 0x0 length 0x4000 00:34:47.012 Nvme1n1 : 1.00 18234.22 71.23 0.00 0.00 6981.13 2481.01 12670.29 00:34:47.012 =================================================================================================================== 00:34:47.012 Total : 18234.22 71.23 0.00 0.00 6981.13 2481.01 12670.29 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=106935 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:47.271 { 00:34:47.271 "params": { 00:34:47.271 "name": "Nvme$subsystem", 00:34:47.271 "trtype": "$TEST_TRANSPORT", 00:34:47.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.271 "adrfam": "ipv4", 00:34:47.271 "trsvcid": "$NVMF_PORT", 00:34:47.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.271 "hdgst": ${hdgst:-false}, 00:34:47.271 "ddgst": ${ddgst:-false} 00:34:47.271 }, 00:34:47.271 "method": "bdev_nvme_attach_controller" 00:34:47.271 } 00:34:47.271 EOF 00:34:47.271 )") 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
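A quick sanity check on the 1-second run above: 18234.22 IOPS of 4096-byte verify I/O is 18234.22 * 4096 / 2^20 = 71.2 MiB/s, the MiB/s column; and by Little's law, queue depth 128 at ~6981 us average latency implies roughly 128 / 0.006981 = 18300 requests per second, consistent with the measured IOPS. In shell arithmetic:

echo $(( 18234 * 4096 / 1048576 ))   # -> 71   (MiB/s at 4 KiB per I/O)
echo $(( 128 * 1000000 / 6981 ))     # -> 18335 (IOPS implied by depth / avg latency)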
00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:47.271 11:02:16 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:47.271 "params": { 00:34:47.271 "name": "Nvme1", 00:34:47.271 "trtype": "rdma", 00:34:47.271 "traddr": "192.168.100.8", 00:34:47.271 "adrfam": "ipv4", 00:34:47.271 "trsvcid": "4420", 00:34:47.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:47.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:47.271 "hdgst": false, 00:34:47.271 "ddgst": false 00:34:47.271 }, 00:34:47.271 "method": "bdev_nvme_attach_controller" 00:34:47.271 }' 00:34:47.271 [2024-06-10 11:02:16.114913] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:47.271 [2024-06-10 11:02:16.114962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106935 ] 00:34:47.271 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.271 [2024-06-10 11:02:16.175246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.271 [2024-06-10 11:02:16.243231] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.530 Running I/O for 15 seconds... 00:34:50.061 11:02:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 106456 00:34:50.061 11:02:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:50.628 [2024-06-10 11:02:19.647978] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:50.628 [2024-06-10 11:02:19.648022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:50.628 [2024-06-10 11:02:19.648032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.628 [2024-06-10 11:02:19.648048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:50.628 [2024-06-10 11:02:19.648056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.628 [2024-06-10 11:02:19.648065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:50.628 [2024-06-10 11:02:19.648072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.628 [2024-06-10 11:02:19.648081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:50.628 [2024-06-10 11:02:19.648088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.628 [2024-06-10 11:02:19.648096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:50.628 [2024-06-10 11:02:19.648102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.628 [2024-06-10 11:02:19.648111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:50.628 [2024-06-10 11:02:19.648118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0
[remaining nvme_qpair.c *NOTICE* command/completion pairs elided; they repeat the same pattern for every I/O still outstanding on qid:1 when the target was killed: WRITEs from lba:129384 through lba:130040 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs from lba:129024 through lba:129128 (len:8, SGL KEYED DATA BLOCK ADDRESS 0x200007500000-0x20000751a000 len:0x1000 key:0xab9a2549), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0; the trace is truncated mid-run]
00:34:50.631 [2024-06-10 11:02:19.649633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 
11:02:19.649777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.649984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.649993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.650002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.650011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0xab9a2549 00:34:50.631 [2024-06-10 11:02:19.650017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1d86100 sqhd:4980 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.650348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:50.631 [2024-06-10 11:02:19.650359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:50.631 [2024-06-10 11:02:19.650365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129328 len:8 PRP1 0x0 PRP2 0x0 00:34:50.631 [2024-06-10 11:02:19.650372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.631 [2024-06-10 11:02:19.650408] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
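Each aborted command above appears as a NOTICE pair: the queued command from nvme_io_qpair_print_command, then its ABORTED - SQ DELETION completion. When triaging a capture of this console output, a per-opcode summary reads faster than the raw flood; a minimal sketch, assuming the output was saved to a hypothetical build.log:

# Summarize aborted I/O records per opcode, with the touched LBA range.
# "build.log" is an assumed capture of the console output shown here.
grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+' build.log |
awk '{ op = $2; lba = substr($6, 5) + 0
       n[op]++
       if (!(op in lo) || lba < lo[op]) lo[op] = lba
       if (lba > hi[op]) hi[op] = lba }
     END { for (op in n) printf "%-5s %6d cmds, lba %d..%d\n", op, n[op], lo[op], hi[op] }'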
00:34:50.631 [2024-06-10 11:02:19.653529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:50.631 [2024-06-10 11:02:19.653567] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:50.890 [2024-06-10 11:02:19.670171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:50.890 [2024-06-10 11:02:19.672963] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:50.890 [2024-06-10 11:02:19.672982] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:50.890 [2024-06-10 11:02:19.672988] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:51.826 [2024-06-10 11:02:20.675760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:51.826 [2024-06-10 11:02:20.675784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:51.826 [2024-06-10 11:02:20.675946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:51.826 [2024-06-10 11:02:20.675960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:51.826 [2024-06-10 11:02:20.675968] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:51.826 [2024-06-10 11:02:20.678589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:51.826 [2024-06-10 11:02:20.684492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:51.826 [2024-06-10 11:02:20.687316] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:51.826 [2024-06-10 11:02:20.687334] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:51.826 [2024-06-10 11:02:20.687339] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:52.762 [2024-06-10 11:02:21.690153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:52.762 [2024-06-10 11:02:21.690174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:52.762 [2024-06-10 11:02:21.690335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:52.762 [2024-06-10 11:02:21.690344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:52.762 [2024-06-10 11:02:21.690354] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:52.762 [2024-06-10 11:02:21.692876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
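The negative codes in the reset loop above are ordinary kernel errnos: -74 is EBADMSG (the REJECTED CM event aborts the connect) and -6 is ENXIO, which the log itself expands to "No such device or address". A quick way to decode them, assuming the errno(1) utility from the moreutils package is available:

# Decode "RDMA connect error -74" and "CQ transport error -6".
errno 74    # EBADMSG 74 Bad message
errno 6     # ENXIO 6 No such device or address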
00:34:52.762 [2024-06-10 11:02:21.696307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:52.762 [2024-06-10 11:02:21.699378] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:52.762 [2024-06-10 11:02:21.699396] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:52.762 [2024-06-10 11:02:21.699402] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:53.328 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 106456 Killed "${NVMF_APP[@]}" "$@" 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=107853 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 107853 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 107853 ']' 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:53.328 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:53.328 [2024-06-10 11:02:22.134624] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:34:53.328 [2024-06-10 11:02:22.134665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.328 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.328 [2024-06-10 11:02:22.195256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:53.328 [2024-06-10 11:02:22.273009] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.328 [2024-06-10 11:02:22.273046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.328 [2024-06-10 11:02:22.273053] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.328 [2024-06-10 11:02:22.273059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:53.328 [2024-06-10 11:02:22.273064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:53.328 [2024-06-10 11:02:22.273101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.328 [2024-06-10 11:02:22.273185] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:53.328 [2024-06-10 11:02:22.273186] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.894 [2024-06-10 11:02:22.702340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:53.894 [2024-06-10 11:02:22.702379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:53.894 [2024-06-10 11:02:22.702558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:53.894 [2024-06-10 11:02:22.702570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:53.894 [2024-06-10 11:02:22.702577] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:53.894 [2024-06-10 11:02:22.705333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:53.894 [2024-06-10 11:02:22.711816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:53.894 [2024-06-10 11:02:22.714742] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:53.894 [2024-06-10 11:02:22.714762] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:53.894 [2024-06-10 11:02:22.714768] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:54.153 11:02:22 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:54.153 [2024-06-10 11:02:23.000587] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xd250d0/0xd24710) succeed. 00:34:54.153 [2024-06-10 11:02:23.009418] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xd26400/0xd24c90) succeed. 00:34:54.153 [2024-06-10 11:02:23.009439] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:54.153 Malloc0 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:54.153 [2024-06-10 11:02:23.070282] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:54.153 11:02:23 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 106935 00:34:54.720 [2024-06-10 11:02:23.717651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:54.720 [2024-06-10 11:02:23.717677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:54.720 [2024-06-10 11:02:23.717851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:54.720 [2024-06-10 11:02:23.717861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:54.720 [2024-06-10 11:02:23.717869] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:54.720 [2024-06-10 11:02:23.720640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:54.720 [2024-06-10 11:02:23.729572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:54.979 [2024-06-10 11:02:23.778163] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
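Stripped of the xtrace noise, the tgt_init sequence above reduces to five RPC calls against the restarted nvmf_tgt. A minimal sketch using SPDK's rpc.py (the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions; the test issues the same calls through its rpc_cmd wrapper):

# Target bring-up mirrored from the trace: RDMA transport, a 64 MiB
# malloc bdev with 512-byte blocks, and subsystem cnode1 listening on
# 192.168.100.8:4420.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420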
00:35:03.097 00:35:03.097 Latency(us) 00:35:03.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.097 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:03.097 Verification LBA range: start 0x0 length 0x4000 00:35:03.097 Nvme1n1 : 15.01 12690.39 49.57 13723.51 0.00 4825.93 477.87 575218.83 00:35:03.097 =================================================================================================================== 00:35:03.097 Total : 12690.39 49.57 13723.51 0.00 4825.93 477.87 575218.83 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:03.097 rmmod nvme_rdma 00:35:03.097 rmmod nvme_fabrics 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 107853 ']' 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 107853 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 107853 ']' 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 107853 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 107853 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 107853' 00:35:03.097 killing process with pid 107853 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 107853 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 107853 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
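The Job line in the results table above pins down the workload shape (queue depth 128, 4096-byte I/O, verify workload, ~15 s runtime). A standalone run with the same knobs would look roughly like this sketch; the bdev.json name and attach parameters are assumptions, since the CI run drives bdevperf through host/bdevperf.sh:

# Hypothetical standalone invocation matching the job parameters above.
./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 --json bdev.json
# bdev.json (assumed file) would attach the same target via
# bdev_nvme_attach_controller: name=Nvme1, trtype=rdma,
# traddr=192.168.100.8, trsvcid=4420, subnqn=nqn.2016-06.io.spdk:cnode1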
00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:03.097 00:35:03.097 real 0m24.220s 00:35:03.097 user 1m3.905s 00:35:03.097 sys 0m5.208s 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:03.097 11:02:31 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:03.097 ************************************ 00:35:03.097 END TEST nvmf_bdevperf 00:35:03.097 ************************************ 00:35:03.097 11:02:32 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:35:03.097 11:02:32 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:03.097 11:02:32 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:03.097 11:02:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:03.097 ************************************ 00:35:03.097 START TEST nvmf_target_disconnect 00:35:03.097 ************************************ 00:35:03.097 11:02:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:35:03.097 * Looking for test storage... 00:35:03.355 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.355 11:02:32 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.356 11:02:32 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/app/fio/nvme 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:03.356 11:02:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:35:09.927 
11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:09.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:09.927 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:09.927 11:02:37 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@377 -- # modinfo irdma 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:09.927 Found net devices under 0000:af:00.0: cvl_0_0 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:09.927 Found net devices under 0000:af:00.1: cvl_0_1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ 
rdma == rdma ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_0 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:35:09.927 11:02:37 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:35:09.927 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:35:09.927 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:35:09.927 altname enp175s0f0np0 00:35:09.927 altname ens801f0np0 00:35:09.927 inet 192.168.100.8/24 scope global cvl_0_0 00:35:09.927 valid_lft forever preferred_lft forever 00:35:09.927 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:35:09.927 valid_lft forever preferred_lft forever 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:35:09.927 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:35:09.927 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:35:09.927 altname enp175s0f1np1 00:35:09.927 altname ens801f1np1 00:35:09.927 inet 192.168.100.9/24 scope global cvl_0_1 00:35:09.927 valid_lft forever preferred_lft forever 00:35:09.927 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:35:09.927 valid_lft forever preferred_lft forever 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_0 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_1 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:35:09.927 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:09.928 192.168.100.9' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:09.928 192.168.100.9' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:09.928 192.168.100.9' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:35:09.928 11:02:37 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:09.928 11:02:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:09.928 ************************************ 00:35:09.928 START TEST nvmf_target_disconnect_tc1 00:35:09.928 ************************************ 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x 
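
Putting the @456-@458 steps traced above together: get_available_rdma_ips emits one address per RDMA interface, and the harness peels the first and second target IPs off that multi-line list. A condensed sketch of just the splitting (the exact pipeline order is assumed from the traced head/tail commands):

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
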
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect ]] 00:35:09.928 11:02:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:09.928 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.928 [2024-06-10 11:02:38.111237] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:09.928 [2024-06-10 11:02:38.111279] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:09.928 [2024-06-10 11:02:38.111291] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:35:10.213 [2024-06-10 11:02:39.114085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:35:10.213 [2024-06-10 11:02:39.114113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:35:10.213 [2024-06-10 11:02:39.114122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:35:10.213 [2024-06-10 11:02:39.114148] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:10.213 [2024-06-10 11:02:39.114155] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:35:10.213 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:35:10.213 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:10.213 Initializing NVMe Controllers 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:10.213 00:35:10.213 real 0m1.112s 00:35:10.213 user 0m0.947s 00:35:10.213 sys 0m0.160s 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:10.213 ************************************ 00:35:10.213 END TEST nvmf_target_disconnect_tc1 00:35:10.213 ************************************ 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:10.213 ************************************ 00:35:10.213 START TEST nvmf_target_disconnect_tc2 00:35:10.213 ************************************ 00:35:10.213 11:02:39 
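
tc1 above passes precisely because of the NOT wrapper from autotest_common.sh: the reconnect example is expected to fail while nothing is listening on 192.168.100.8:4420, and NOT inverts that exit status. A simplified sketch of the idea -- the real helper, per the @637-@660 trace, also validates the executable with type -P and treats exit codes above 128 (signal deaths) specially:

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # succeed only if the wrapped command failed
}

NOT false && echo "command failed, as the test requires"
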
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=113015 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 113015 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 113015 ']' 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:10.213 11:02:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:10.470 [2024-06-10 11:02:39.244147] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:10.470 [2024-06-10 11:02:39.244184] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.470 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.470 [2024-06-10 11:02:39.317216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:10.470 [2024-06-10 11:02:39.390737] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.470 [2024-06-10 11:02:39.390778] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:10.470 [2024-06-10 11:02:39.390785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.470 [2024-06-10 11:02:39.390790] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.470 [2024-06-10 11:02:39.390796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
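
A sketch of the nvmfappstart/waitforlisten pattern traced above: launch nvmf_tgt in the background, record its pid, and poll the RPC socket until the app answers. Using rpc_get_methods as the probe is an assumption (any RPC that succeeds once /var/tmp/spdk.sock is listening would do); paths follow the workspace layout in the log.

/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# waitforlisten: block until the target accepts RPCs on the default socket
until /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done
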
00:35:10.470 [2024-06-10 11:02:39.390910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:35:10.470 [2024-06-10 11:02:39.391004] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:35:10.470 [2024-06-10 11:02:39.391104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:35:10.470 [2024-06-10 11:02:39.391106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:35:11.035 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:11.035 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:35:11.035 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:11.035 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:11.035 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.291 Malloc0 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.291 [2024-06-10 11:02:40.124116] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1243990/0x1242fd0) succeed. 00:35:11.291 [2024-06-10 11:02:40.133231] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1244d40/0x1243550) succeed. 00:35:11.291 [2024-06-10 11:02:40.133252] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
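
The rpc_cmd calls traced in this block and the one just below stand up the whole target side of tc2. Written out as plain scripts/rpc.py invocations against the default /var/tmp/spdk.sock (a sketch -- rpc_cmd in the harness is a thin wrapper around commands like these):

RPC=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024   # RDMA transport layer
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose the ramdisk as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
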
New I/O unit size 24576 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.291 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.291 [2024-06-10 11:02:40.161531] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=113259 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:11.292 11:02:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:11.292 EAL: No free 2048 kB hugepages reported on node 1 00:35:13.190 11:02:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 113015 00:35:13.190 11:02:42 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- 
# sleep 2 00:35:14.129 [2024-06-10 11:02:42.879971] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Write completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 Read completed with error (sct=0, sc=8) 00:35:14.129 starting I/O failed 00:35:14.129 [2024-06-10 11:02:42.880577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:14.129 [2024-06-10 11:02:42.882464] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:14.129 [2024-06-10 11:02:42.882481] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:14.129 [2024-06-10 11:02:42.882487] 
nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:15.067 [2024-06-10 11:02:43.885166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:15.067 qpair failed and we were unable to recover it. 00:35:15.067 [2024-06-10 11:02:43.886981] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:15.067 [2024-06-10 11:02:43.886997] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:15.067 [2024-06-10 11:02:43.887003] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:15.326 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 113015 Killed "${NVMF_APP[@]}" "$@" 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=113934 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 113934 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 113934 ']' 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:15.326 11:02:44 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:15.326 [2024-06-10 11:02:44.233612] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
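
Stepping back, the host/target_disconnect.sh line numbers in the trace (@37-@50) give tc2's full choreography; a reconstructed sketch follows, with the reconnect flags annotated from SPDK's perf-style option conventions (treat those flag meanings as assumptions). The burst of "completed with error (sct=0, sc=8)" right after the kill reads, against the generic NVMe status table, as "Command Aborted due to SQ Deletion" -- the expected fallout of hard-killing the target mid-I/O (a spec-based decode, not harness output).

disconnect_init 192.168.100.8      # @37/@17: nvmfappstart -m 0xF0 plus the RPC setup above
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &   # @40: queue depth 32, 4 KiB I/O, 50/50 mix, 10 s
reconnectpid=$!                    # @42
sleep 2                            # @44: let I/O get going
kill -9 $nvmfpid                   # @45: hard-kill the target mid-I/O ("113015 Killed" above)
sleep 2                            # @47
disconnect_init 192.168.100.8      # @48: restart the target (pid 113934 in the trace)
wait $reconnectpid                 # @50: reconnect must ride out the disconnect
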
00:35:15.326 [2024-06-10 11:02:44.233654] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.326 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.326 [2024-06-10 11:02:44.305268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:15.585 [2024-06-10 11:02:44.380807] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.585 [2024-06-10 11:02:44.380845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.585 [2024-06-10 11:02:44.380852] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.585 [2024-06-10 11:02:44.380859] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.585 [2024-06-10 11:02:44.380865] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:15.585 [2024-06-10 11:02:44.380998] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:35:15.585 [2024-06-10 11:02:44.381103] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:35:15.585 [2024-06-10 11:02:44.381214] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:35:15.585 [2024-06-10 11:02:44.381215] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:35:16.153 [2024-06-10 11:02:44.889724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:16.153 qpair failed and we were unable to recover it. 00:35:16.153 [2024-06-10 11:02:44.891561] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:16.153 [2024-06-10 11:02:44.891578] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:16.153 [2024-06-10 11:02:44.891585] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.153 Malloc0 00:35:16.153 11:02:45 
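
Both app starts pin their reactors to cores 4-7 because of the -m 0xF0 core mask: 0xF0 is binary 11110000, selecting exactly CPUs 4 through 7. A one-liner to confirm the mapping:

printf 'mask 0xF0 -> cores:'; for c in {0..7}; do (( (0xF0 >> c) & 1 )) && printf ' %d' "$c"; done; echo
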
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.153 [2024-06-10 11:02:45.107871] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x81c990/0x81bfd0) succeed. 00:35:16.153 [2024-06-10 11:02:45.116992] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x81dd40/0x81c550) succeed. 00:35:16.153 [2024-06-10 11:02:45.117012] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.153 [2024-06-10 11:02:45.145296] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.153 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:16.154 11:02:45 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.154 11:02:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 113259 00:35:17.092 [2024-06-10 11:02:45.894318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 00:35:17.092 [2024-06-10 11:02:45.902646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:45.902706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:45.902724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:45.902731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:45.902738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:45.910940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 00:35:17.092 [2024-06-10 11:02:45.922667] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:45.922713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:45.922728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:45.922735] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:45.922741] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:45.930976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 00:35:17.092 [2024-06-10 11:02:45.942714] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:45.942761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:45.942776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:45.942782] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:45.942789] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:45.951138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 
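
The block above repeats for the rest of the run, so it is worth decoding once. On the target side, _nvmf_ctrlr_add_io_qpair rejects the I/O-queue Connect because controller ID 0x1 belonged to the killed target instance; the host sees sct 1 (command-specific status type) with sc 130 = 0x82, which in the NVMe-oF fabrics Connect status table is "Connect Invalid Parameters". A small decoder for the codes involved (the table follows the spec; treat it as a reference assumption, not harness output):

decode_connect_sc() {
    case $(printf '0x%02x' "$1") in
        0x80) echo 'Connect Incompatible Format' ;;
        0x81) echo 'Connect Controller Busy' ;;
        0x82) echo 'Connect Invalid Parameters' ;;
        0x83) echo 'Connect Restart Discovery' ;;
        0x84) echo 'Connect Invalid Host' ;;
        *)    echo 'unknown/other' ;;
    esac
}

decode_connect_sc 130   # -> Connect Invalid Parameters
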
00:35:17.092 [2024-06-10 11:02:45.962760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:45.962805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:45.962820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:45.962827] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:45.962833] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:45.971119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 00:35:17.092 [2024-06-10 11:02:45.982774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:45.982818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:45.982832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:45.982839] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:45.982845] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:45.991210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 00:35:17.092 [2024-06-10 11:02:46.002838] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:46.002881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:46.002895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:46.002902] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:46.002909] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:46.011198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 
00:35:17.092 [2024-06-10 11:02:46.022886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:46.022925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:46.022939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:46.022946] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:46.022952] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:46.031306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.092 qpair failed and we were unable to recover it. 00:35:17.092 [2024-06-10 11:02:46.042984] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.092 [2024-06-10 11:02:46.043027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.092 [2024-06-10 11:02:46.043041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.092 [2024-06-10 11:02:46.043048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.092 [2024-06-10 11:02:46.043054] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.092 [2024-06-10 11:02:46.051300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.093 qpair failed and we were unable to recover it. 00:35:17.093 [2024-06-10 11:02:46.062974] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.093 [2024-06-10 11:02:46.063017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.093 [2024-06-10 11:02:46.063031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.093 [2024-06-10 11:02:46.063041] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.093 [2024-06-10 11:02:46.063047] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.093 [2024-06-10 11:02:46.071422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.093 qpair failed and we were unable to recover it. 
00:35:17.093 [2024-06-10 11:02:46.083045] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.093 [2024-06-10 11:02:46.083084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.093 [2024-06-10 11:02:46.083098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.093 [2024-06-10 11:02:46.083105] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.093 [2024-06-10 11:02:46.083111] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.093 [2024-06-10 11:02:46.091490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.093 qpair failed and we were unable to recover it. 00:35:17.093 [2024-06-10 11:02:46.103090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.093 [2024-06-10 11:02:46.103139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.093 [2024-06-10 11:02:46.103152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.093 [2024-06-10 11:02:46.103159] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.093 [2024-06-10 11:02:46.103165] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.093 [2024-06-10 11:02:46.111518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.093 qpair failed and we were unable to recover it. 00:35:17.352 [2024-06-10 11:02:46.122958] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.352 [2024-06-10 11:02:46.123004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.352 [2024-06-10 11:02:46.123018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.352 [2024-06-10 11:02:46.123025] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.352 [2024-06-10 11:02:46.123032] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.352 [2024-06-10 11:02:46.131574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.352 qpair failed and we were unable to recover it. 
00:35:17.352 [2024-06-10 11:02:46.143230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.352 [2024-06-10 11:02:46.143276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.352 [2024-06-10 11:02:46.143290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.352 [2024-06-10 11:02:46.143297] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.352 [2024-06-10 11:02:46.143303] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.352 [2024-06-10 11:02:46.151620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.352 qpair failed and we were unable to recover it. 00:35:17.352 [2024-06-10 11:02:46.163236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.352 [2024-06-10 11:02:46.163275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.352 [2024-06-10 11:02:46.163289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.352 [2024-06-10 11:02:46.163296] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.352 [2024-06-10 11:02:46.163302] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.171667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 00:35:17.353 [2024-06-10 11:02:46.183335] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.183379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.183392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.183399] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.183406] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.191721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 
00:35:17.353 [2024-06-10 11:02:46.203411] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.203455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.203470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.203477] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.203483] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.211848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 00:35:17.353 [2024-06-10 11:02:46.223443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.223490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.223504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.223511] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.223517] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.231839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 00:35:17.353 [2024-06-10 11:02:46.243466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.243503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.243519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.243526] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.243532] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.251905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 
00:35:17.353 [2024-06-10 11:02:46.263575] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.263611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.263624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.263631] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.263637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.271986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 00:35:17.353 [2024-06-10 11:02:46.283667] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.283710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.283724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.283731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.283737] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.292093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 00:35:17.353 [2024-06-10 11:02:46.303693] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.303735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.303748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.303755] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.303761] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.312171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 
00:35:17.353 [2024-06-10 11:02:46.323751] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.323791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.323804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.323811] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.323818] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.332172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 00:35:17.353 [2024-06-10 11:02:46.343829] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.343870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.343884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.343891] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.343898] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.352216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 00:35:17.353 [2024-06-10 11:02:46.363883] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.353 [2024-06-10 11:02:46.363925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.353 [2024-06-10 11:02:46.363938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.353 [2024-06-10 11:02:46.363945] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.353 [2024-06-10 11:02:46.363951] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.353 [2024-06-10 11:02:46.372321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.353 qpair failed and we were unable to recover it. 
00:35:17.613 [2024-06-10 11:02:46.383985] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.384036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.384050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.384057] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.384064] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.392401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 00:35:17.613 [2024-06-10 11:02:46.403999] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.404045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.404059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.404066] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.404072] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.412399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 00:35:17.613 [2024-06-10 11:02:46.424056] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.424103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.424116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.424123] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.424130] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.432467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 
00:35:17.613 [2024-06-10 11:02:46.444149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.444193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.444206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.444213] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.444219] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.452580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 00:35:17.613 [2024-06-10 11:02:46.464238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.464279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.464293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.464300] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.464306] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.472637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 00:35:17.613 [2024-06-10 11:02:46.484266] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.484308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.484321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.484329] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.484335] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.492677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 
00:35:17.613 [2024-06-10 11:02:46.504327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.504369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.504382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.504392] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.504399] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.512747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 00:35:17.613 [2024-06-10 11:02:46.524340] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.524384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.524398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.524405] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.524411] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.532789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 00:35:17.613 [2024-06-10 11:02:46.544279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.544320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.544333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.613 [2024-06-10 11:02:46.544340] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.613 [2024-06-10 11:02:46.544347] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.613 [2024-06-10 11:02:46.552780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.613 qpair failed and we were unable to recover it. 
00:35:17.613 [2024-06-10 11:02:46.564279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.613 [2024-06-10 11:02:46.564322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.613 [2024-06-10 11:02:46.564335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.614 [2024-06-10 11:02:46.564342] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.614 [2024-06-10 11:02:46.564348] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.614 [2024-06-10 11:02:46.572865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.614 qpair failed and we were unable to recover it. 00:35:17.614 [2024-06-10 11:02:46.584378] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.614 [2024-06-10 11:02:46.584416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.614 [2024-06-10 11:02:46.584430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.614 [2024-06-10 11:02:46.584437] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.614 [2024-06-10 11:02:46.584443] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.614 [2024-06-10 11:02:46.592872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.614 qpair failed and we were unable to recover it. 00:35:17.614 [2024-06-10 11:02:46.604468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.614 [2024-06-10 11:02:46.604512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.614 [2024-06-10 11:02:46.604526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.614 [2024-06-10 11:02:46.604532] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.614 [2024-06-10 11:02:46.604538] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.614 [2024-06-10 11:02:46.613014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.614 qpair failed and we were unable to recover it. 
00:35:17.614 [2024-06-10 11:02:46.624659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.614 [2024-06-10 11:02:46.624704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.614 [2024-06-10 11:02:46.624718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.614 [2024-06-10 11:02:46.624725] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.614 [2024-06-10 11:02:46.624731] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.614 [2024-06-10 11:02:46.633044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.614 qpair failed and we were unable to recover it. 00:35:17.873 [2024-06-10 11:02:46.644500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.873 [2024-06-10 11:02:46.644545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.873 [2024-06-10 11:02:46.644559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.873 [2024-06-10 11:02:46.644567] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.873 [2024-06-10 11:02:46.644573] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.873 [2024-06-10 11:02:46.653120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.873 qpair failed and we were unable to recover it. 00:35:17.873 [2024-06-10 11:02:46.664754] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.873 [2024-06-10 11:02:46.664801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.873 [2024-06-10 11:02:46.664816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.873 [2024-06-10 11:02:46.664823] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.873 [2024-06-10 11:02:46.664829] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.873 [2024-06-10 11:02:46.673184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.873 qpair failed and we were unable to recover it. 
00:35:17.873 [2024-06-10 11:02:46.684845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.873 [2024-06-10 11:02:46.684890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.873 [2024-06-10 11:02:46.684906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.873 [2024-06-10 11:02:46.684913] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.873 [2024-06-10 11:02:46.684919] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.873 [2024-06-10 11:02:46.693238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.873 qpair failed and we were unable to recover it. 00:35:17.873 [2024-06-10 11:02:46.704897] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.873 [2024-06-10 11:02:46.704938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.704952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.704971] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.704977] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.713317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 00:35:17.874 [2024-06-10 11:02:46.724660] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.724701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.724716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.724722] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.724728] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.733346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 
00:35:17.874 [2024-06-10 11:02:46.744746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.744787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.744800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.744807] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.744813] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.753393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 00:35:17.874 [2024-06-10 11:02:46.764804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.764848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.764861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.764869] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.764875] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.773495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 00:35:17.874 [2024-06-10 11:02:46.785054] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.785094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.785108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.785114] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.785121] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.793489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 
00:35:17.874 [2024-06-10 11:02:46.805162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.805201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.805215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.805222] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.805228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.813560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 00:35:17.874 [2024-06-10 11:02:46.825200] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.825242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.825256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.825263] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.825269] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.833579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 00:35:17.874 [2024-06-10 11:02:46.845257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.845300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.845314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.845321] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.845327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.853674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 
00:35:17.874 [2024-06-10 11:02:46.865158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.865208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.865223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.865230] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.865236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.873815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 00:35:17.874 [2024-06-10 11:02:46.885196] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:17.874 [2024-06-10 11:02:46.885234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:17.874 [2024-06-10 11:02:46.885248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:17.874 [2024-06-10 11:02:46.885255] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:17.874 [2024-06-10 11:02:46.885261] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:17.874 [2024-06-10 11:02:46.893834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:17.874 qpair failed and we were unable to recover it. 00:35:18.134 [2024-06-10 11:02:46.905158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.134 [2024-06-10 11:02:46.905201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.134 [2024-06-10 11:02:46.905216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.134 [2024-06-10 11:02:46.905223] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.134 [2024-06-10 11:02:46.905229] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.134 [2024-06-10 11:02:46.913902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.134 qpair failed and we were unable to recover it. 
00:35:18.134 [2024-06-10 11:02:46.925411] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.134 [2024-06-10 11:02:46.925457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.134 [2024-06-10 11:02:46.925471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.134 [2024-06-10 11:02:46.925478] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.134 [2024-06-10 11:02:46.925484] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.134 [2024-06-10 11:02:46.933937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.134 qpair failed and we were unable to recover it. 00:35:18.134 [2024-06-10 11:02:46.945481] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.134 [2024-06-10 11:02:46.945523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.134 [2024-06-10 11:02:46.945536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.134 [2024-06-10 11:02:46.945546] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.134 [2024-06-10 11:02:46.945552] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.134 [2024-06-10 11:02:46.953933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.134 qpair failed and we were unable to recover it. 00:35:18.134 [2024-06-10 11:02:46.965375] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.134 [2024-06-10 11:02:46.965414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.134 [2024-06-10 11:02:46.965428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.134 [2024-06-10 11:02:46.965435] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.134 [2024-06-10 11:02:46.965441] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.134 [2024-06-10 11:02:46.974008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.134 qpair failed and we were unable to recover it. 
00:35:18.134 [2024-06-10 11:02:46.985641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.134 [2024-06-10 11:02:46.985683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.134 [2024-06-10 11:02:46.985697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.134 [2024-06-10 11:02:46.985703] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.134 [2024-06-10 11:02:46.985710] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.134 [2024-06-10 11:02:46.993992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.134 qpair failed and we were unable to recover it. 00:35:18.134 [2024-06-10 11:02:47.005554] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.134 [2024-06-10 11:02:47.005597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.134 [2024-06-10 11:02:47.005611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.134 [2024-06-10 11:02:47.005618] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.134 [2024-06-10 11:02:47.005624] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.134 [2024-06-10 11:02:47.014035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.134 qpair failed and we were unable to recover it. 00:35:18.134 [2024-06-10 11:02:47.025728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.134 [2024-06-10 11:02:47.025776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.134 [2024-06-10 11:02:47.025789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.134 [2024-06-10 11:02:47.025796] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.134 [2024-06-10 11:02:47.025803] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.134 [2024-06-10 11:02:47.034179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.135 qpair failed and we were unable to recover it. 
00:35:18.135 [2024-06-10 11:02:47.045835] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.135 [2024-06-10 11:02:47.045879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.135 [2024-06-10 11:02:47.045893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.135 [2024-06-10 11:02:47.045900] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.135 [2024-06-10 11:02:47.045906] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.135 [2024-06-10 11:02:47.054259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.135 qpair failed and we were unable to recover it. 00:35:18.135 [2024-06-10 11:02:47.065869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.135 [2024-06-10 11:02:47.065913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.135 [2024-06-10 11:02:47.065927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.135 [2024-06-10 11:02:47.065934] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.135 [2024-06-10 11:02:47.065941] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.135 [2024-06-10 11:02:47.074383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.135 qpair failed and we were unable to recover it. 00:35:18.135 [2024-06-10 11:02:47.086028] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.135 [2024-06-10 11:02:47.086072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.135 [2024-06-10 11:02:47.086086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.135 [2024-06-10 11:02:47.086093] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.135 [2024-06-10 11:02:47.086099] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.135 [2024-06-10 11:02:47.094425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.135 qpair failed and we were unable to recover it. 
00:35:18.135 [2024-06-10 11:02:47.106027] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.135 [2024-06-10 11:02:47.106073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.135 [2024-06-10 11:02:47.106087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.135 [2024-06-10 11:02:47.106094] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.135 [2024-06-10 11:02:47.106100] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.135 [2024-06-10 11:02:47.114505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.135 qpair failed and we were unable to recover it. 00:35:18.135 [2024-06-10 11:02:47.126186] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.135 [2024-06-10 11:02:47.126224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.135 [2024-06-10 11:02:47.126241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.135 [2024-06-10 11:02:47.126248] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.135 [2024-06-10 11:02:47.126254] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.135 [2024-06-10 11:02:47.134660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.135 qpair failed and we were unable to recover it. 00:35:18.135 [2024-06-10 11:02:47.146203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.135 [2024-06-10 11:02:47.146248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.135 [2024-06-10 11:02:47.146261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.135 [2024-06-10 11:02:47.146268] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.135 [2024-06-10 11:02:47.146275] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.135 [2024-06-10 11:02:47.154647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.135 qpair failed and we were unable to recover it. 
00:35:18.395 [2024-06-10 11:02:47.166257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.166302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.166316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.166323] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.166330] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.174740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.186345] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.186395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.186408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.186415] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.186421] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.194745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.206497] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.206536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.206550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.206557] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.206563] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.214798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 
00:35:18.395 [2024-06-10 11:02:47.226521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.226562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.226576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.226583] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.226589] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.234916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.246544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.246587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.246600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.246607] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.246613] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.255002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.266664] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.266708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.266721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.266728] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.266735] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.275034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 
00:35:18.395 [2024-06-10 11:02:47.286781] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.286818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.286832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.286839] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.286845] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.295078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.306603] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.306650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.306664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.306671] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.306677] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.315094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.326740] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.326783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.326796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.326804] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.326810] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.335224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 
00:35:18.395 [2024-06-10 11:02:47.346856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.346902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.346915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.346922] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.346928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.355279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.367007] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.367048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.367061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.367068] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.367074] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.395 [2024-06-10 11:02:47.375330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.395 qpair failed and we were unable to recover it. 00:35:18.395 [2024-06-10 11:02:47.387080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.395 [2024-06-10 11:02:47.387116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.395 [2024-06-10 11:02:47.387130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.395 [2024-06-10 11:02:47.387140] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.395 [2024-06-10 11:02:47.387146] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.396 [2024-06-10 11:02:47.395382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.396 qpair failed and we were unable to recover it. 
00:35:18.396 [2024-06-10 11:02:47.407192] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.396 [2024-06-10 11:02:47.407242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.396 [2024-06-10 11:02:47.407256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.396 [2024-06-10 11:02:47.407263] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.396 [2024-06-10 11:02:47.407269] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.396 [2024-06-10 11:02:47.415441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.396 qpair failed and we were unable to recover it. 00:35:18.656 [2024-06-10 11:02:47.427203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.656 [2024-06-10 11:02:47.427254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.656 [2024-06-10 11:02:47.427268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.656 [2024-06-10 11:02:47.427275] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.656 [2024-06-10 11:02:47.427281] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.656 [2024-06-10 11:02:47.435557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.656 qpair failed and we were unable to recover it. 00:35:18.656 [2024-06-10 11:02:47.447199] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.656 [2024-06-10 11:02:47.447243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.656 [2024-06-10 11:02:47.447256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.656 [2024-06-10 11:02:47.447263] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.656 [2024-06-10 11:02:47.447269] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.656 [2024-06-10 11:02:47.455609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.656 qpair failed and we were unable to recover it. 
00:35:18.656 [2024-06-10 11:02:47.467354] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.656 [2024-06-10 11:02:47.467390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.656 [2024-06-10 11:02:47.467404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.656 [2024-06-10 11:02:47.467411] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.656 [2024-06-10 11:02:47.467417] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.656 [2024-06-10 11:02:47.475618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.656 qpair failed and we were unable to recover it. 00:35:18.656 [2024-06-10 11:02:47.487366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.656 [2024-06-10 11:02:47.487410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.656 [2024-06-10 11:02:47.487424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.656 [2024-06-10 11:02:47.487430] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.656 [2024-06-10 11:02:47.487436] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.656 [2024-06-10 11:02:47.495764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.656 qpair failed and we were unable to recover it. 00:35:18.656 [2024-06-10 11:02:47.507276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:18.656 [2024-06-10 11:02:47.507316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:18.656 [2024-06-10 11:02:47.507330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:18.656 [2024-06-10 11:02:47.507337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:18.656 [2024-06-10 11:02:47.507343] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:18.656 [2024-06-10 11:02:47.515790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:18.656 qpair failed and we were unable to recover it. 
00:35:18.656 [2024-06-10 11:02:47.527374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.656 [2024-06-10 11:02:47.527418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.656 [2024-06-10 11:02:47.527431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.656 [2024-06-10 11:02:47.527438] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.656 [2024-06-10 11:02:47.527444] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.656 [2024-06-10 11:02:47.535811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.656 qpair failed and we were unable to recover it.
00:35:18.656 [2024-06-10 11:02:47.547439] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.656 [2024-06-10 11:02:47.547487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.656 [2024-06-10 11:02:47.547501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.656 [2024-06-10 11:02:47.547508] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.656 [2024-06-10 11:02:47.547515] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.656 [2024-06-10 11:02:47.555833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.656 qpair failed and we were unable to recover it.
00:35:18.656 [2024-06-10 11:02:47.567588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.656 [2024-06-10 11:02:47.567630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.656 [2024-06-10 11:02:47.567647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.656 [2024-06-10 11:02:47.567654] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.656 [2024-06-10 11:02:47.567660] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.656 [2024-06-10 11:02:47.575939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.656 qpair failed and we were unable to recover it.
00:35:18.656 [2024-06-10 11:02:47.587541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.656 [2024-06-10 11:02:47.587583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.656 [2024-06-10 11:02:47.587597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.656 [2024-06-10 11:02:47.587604] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.656 [2024-06-10 11:02:47.587610] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.656 [2024-06-10 11:02:47.595924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.656 qpair failed and we were unable to recover it.
00:35:18.656 [2024-06-10 11:02:47.607598] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.656 [2024-06-10 11:02:47.607642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.656 [2024-06-10 11:02:47.607656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.656 [2024-06-10 11:02:47.607663] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.656 [2024-06-10 11:02:47.607669] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.656 [2024-06-10 11:02:47.616072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.657 qpair failed and we were unable to recover it.
00:35:18.657 [2024-06-10 11:02:47.627576] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.657 [2024-06-10 11:02:47.627613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.657 [2024-06-10 11:02:47.627627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.657 [2024-06-10 11:02:47.627634] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.657 [2024-06-10 11:02:47.627640] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.657 [2024-06-10 11:02:47.635910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.657 qpair failed and we were unable to recover it.
00:35:18.657 [2024-06-10 11:02:47.647629] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.657 [2024-06-10 11:02:47.647671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.657 [2024-06-10 11:02:47.647684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.657 [2024-06-10 11:02:47.647691] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.657 [2024-06-10 11:02:47.647697] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.657 [2024-06-10 11:02:47.656187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.657 qpair failed and we were unable to recover it.
00:35:18.657 [2024-06-10 11:02:47.667720] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.657 [2024-06-10 11:02:47.667760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.657 [2024-06-10 11:02:47.667774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.657 [2024-06-10 11:02:47.667781] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.657 [2024-06-10 11:02:47.667787] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.657 [2024-06-10 11:02:47.676188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.657 qpair failed and we were unable to recover it.
00:35:18.916 [2024-06-10 11:02:47.687825] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.916 [2024-06-10 11:02:47.687875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.916 [2024-06-10 11:02:47.687889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.916 [2024-06-10 11:02:47.687896] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.916 [2024-06-10 11:02:47.687902] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.916 [2024-06-10 11:02:47.696269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.916 qpair failed and we were unable to recover it.
00:35:18.916 [2024-06-10 11:02:47.707884] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.916 [2024-06-10 11:02:47.707924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.916 [2024-06-10 11:02:47.707938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.916 [2024-06-10 11:02:47.707945] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.916 [2024-06-10 11:02:47.707951] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.916 [2024-06-10 11:02:47.716289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.916 qpair failed and we were unable to recover it.
00:35:18.916 [2024-06-10 11:02:47.727946] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.916 [2024-06-10 11:02:47.727994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.728008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.728015] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.728021] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.736409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.747975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.748027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.748040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.748047] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.748053] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.756392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.768101] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.768138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.768152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.768159] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.768165] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.776484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.788131] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.788167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.788180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.788187] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.788194] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.796585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.808238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.808281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.808294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.808301] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.808307] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.816590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.828385] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.828428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.828442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.828452] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.828458] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.836714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.848446] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.848487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.848501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.848508] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.848514] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.856797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.868492] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.868531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.868545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.868552] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.868558] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.876784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.888511] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.888552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.888566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.888572] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.888579] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.896882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.908628] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.908678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.908691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.908698] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.908704] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.916968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:18.917 [2024-06-10 11:02:47.928672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:18.917 [2024-06-10 11:02:47.928716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:18.917 [2024-06-10 11:02:47.928730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:18.917 [2024-06-10 11:02:47.928737] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:18.917 [2024-06-10 11:02:47.928743] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:18.917 [2024-06-10 11:02:47.937014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:18.917 qpair failed and we were unable to recover it.
00:35:19.177 [2024-06-10 11:02:47.948558] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:47.948604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:47.948618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:47.948625] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:47.948632] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:47.957009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
00:35:19.177 [2024-06-10 11:02:47.968612] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:47.968656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:47.968670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:47.968677] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:47.968683] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:47.977102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
00:35:19.177 [2024-06-10 11:02:47.988674] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:47.988724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:47.988737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:47.988744] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:47.988750] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:47.997140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
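The ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair lines are the target half of the same failure: the controller ID carried in the CONNECT data is looked up among the subsystem's live controllers, and when no controller with that ID exists any more (here 0x1, presumably torn down earlier in the test), the I/O qpair is refused. A hedged sketch of that lookup-and-reject pattern follows; every name in it (subsystem, ctrlr, find_ctrlr, add_io_qpair) is invented for illustration and is not SPDK's internal API.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the target's bookkeeping. */
struct ctrlr { uint16_t cntlid; struct ctrlr *next; };
struct subsystem { struct ctrlr *ctrlrs; };

static struct ctrlr *find_ctrlr(struct subsystem *ss, uint16_t cntlid)
{
    for (struct ctrlr *c = ss->ctrlrs; c != NULL; c = c->next) {
        if (c->cntlid == cntlid) {
            return c;
        }
    }
    return NULL;
}

static int add_io_qpair(struct subsystem *ss, uint16_t cntlid)
{
    if (find_ctrlr(ss, cntlid) == NULL) {
        /* The condition behind "Unknown controller ID 0x1"; the host
         * sees it as sct 1, sc 0x82 on the CONNECT completion. */
        fprintf(stderr, "Unknown controller ID 0x%x\n", cntlid);
        return -1;
    }
    return 0;
}

int main(void)
{
    struct subsystem ss = { .ctrlrs = NULL };  /* no live controllers */
    return add_io_qpair(&ss, 0x1) ? 1 : 0;
}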
00:35:19.177 [2024-06-10 11:02:48.008770] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:48.008813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:48.008830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:48.008837] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:48.008843] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:48.017205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
00:35:19.177 [2024-06-10 11:02:48.028838] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:48.028880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:48.028893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:48.028901] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:48.028907] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:48.037290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
00:35:19.177 [2024-06-10 11:02:48.048841] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:48.048882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:48.048895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:48.048902] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:48.048908] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:48.057394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
00:35:19.177 [2024-06-10 11:02:48.068896] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:48.068939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:48.068952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:48.068965] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:48.068971] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:48.077364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
00:35:19.177 [2024-06-10 11:02:48.088978] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.177 [2024-06-10 11:02:48.089020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.177 [2024-06-10 11:02:48.089034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.177 [2024-06-10 11:02:48.089041] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.177 [2024-06-10 11:02:48.089047] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.177 [2024-06-10 11:02:48.097464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.177 qpair failed and we were unable to recover it.
00:35:19.178 [2024-06-10 11:02:48.109096] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.178 [2024-06-10 11:02:48.109137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.178 [2024-06-10 11:02:48.109151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.178 [2024-06-10 11:02:48.109158] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.178 [2024-06-10 11:02:48.109164] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.178 [2024-06-10 11:02:48.117539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.178 qpair failed and we were unable to recover it.
00:35:19.178 [2024-06-10 11:02:48.129014] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.178 [2024-06-10 11:02:48.129060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.178 [2024-06-10 11:02:48.129074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.178 [2024-06-10 11:02:48.129081] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.178 [2024-06-10 11:02:48.129087] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.178 [2024-06-10 11:02:48.137539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.178 qpair failed and we were unable to recover it.
00:35:19.178 [2024-06-10 11:02:48.149190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.178 [2024-06-10 11:02:48.149230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.178 [2024-06-10 11:02:48.149244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.178 [2024-06-10 11:02:48.149251] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.178 [2024-06-10 11:02:48.149258] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.178 [2024-06-10 11:02:48.157635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.178 qpair failed and we were unable to recover it.
00:35:19.178 [2024-06-10 11:02:48.169256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.178 [2024-06-10 11:02:48.169295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.178 [2024-06-10 11:02:48.169309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.178 [2024-06-10 11:02:48.169315] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.178 [2024-06-10 11:02:48.169321] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.178 [2024-06-10 11:02:48.177599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.178 qpair failed and we were unable to recover it.
00:35:19.178 [2024-06-10 11:02:48.189238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.178 [2024-06-10 11:02:48.189287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.178 [2024-06-10 11:02:48.189300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.178 [2024-06-10 11:02:48.189308] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.178 [2024-06-10 11:02:48.189314] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.178 [2024-06-10 11:02:48.197654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.178 qpair failed and we were unable to recover it.
00:35:19.436 [2024-06-10 11:02:48.209087] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.209133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.209148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.209154] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.209160] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.217688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.229196] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.229237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.229251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.229258] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.229264] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.237648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.249208] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.249250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.249264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.249271] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.249277] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.257780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.269419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.269456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.269469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.269480] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.269486] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.277854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.289323] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.289368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.289383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.289390] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.289396] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.297896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.309454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.309498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.309512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.309519] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.309526] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.317968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.329528] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.329569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.329584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.329591] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.329597] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.338081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.349704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.349749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.349763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.349770] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.349776] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.358028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.369773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.369815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.369828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.369835] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.369841] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.378182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.389818] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.389861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.389875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.389882] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.389888] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.398300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.409780] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.409824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.409837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.409844] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.409850] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.418158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.429865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.429910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.429925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.429932] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.429939] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.438337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.437 [2024-06-10 11:02:48.450024] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.437 [2024-06-10 11:02:48.450067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.437 [2024-06-10 11:02:48.450086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.437 [2024-06-10 11:02:48.450093] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.437 [2024-06-10 11:02:48.450099] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.437 [2024-06-10 11:02:48.458374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.437 qpair failed and we were unable to recover it.
00:35:19.695 [2024-06-10 11:02:48.469997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.695 [2024-06-10 11:02:48.470045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.695 [2024-06-10 11:02:48.470059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.695 [2024-06-10 11:02:48.470065] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.695 [2024-06-10 11:02:48.470071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.695 [2024-06-10 11:02:48.478436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.695 qpair failed and we were unable to recover it.
00:35:19.695 [2024-06-10 11:02:48.490053] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.695 [2024-06-10 11:02:48.490095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.695 [2024-06-10 11:02:48.490109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.695 [2024-06-10 11:02:48.490116] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.695 [2024-06-10 11:02:48.490122] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.498539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.510177] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.510213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.510227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.510234] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.510240] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.518643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.530265] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.530309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.530322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.530329] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.530335] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.538653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.550272] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.550314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.550327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.550334] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.550340] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.558671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.570359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.570412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.570426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.570433] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.570439] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.578763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.590393] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.590432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.590446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.590453] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.590459] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.598811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.610492] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.610532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.610546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.610552] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.610559] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.618924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.630478] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.630522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.630536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.630543] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.630549] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.639006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.650598] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.650642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.650656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.650663] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.650669] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.658993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.670592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.670630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.670644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.670651] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.670657] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.696 [2024-06-10 11:02:48.679141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.696 qpair failed and we were unable to recover it.
00:35:19.696 [2024-06-10 11:02:48.690698] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.696 [2024-06-10 11:02:48.690742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.696 [2024-06-10 11:02:48.690758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.696 [2024-06-10 11:02:48.690765] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.696 [2024-06-10 11:02:48.690771] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.697 [2024-06-10 11:02:48.699176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.697 qpair failed and we were unable to recover it.
00:35:19.697 [2024-06-10 11:02:48.710794] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.697 [2024-06-10 11:02:48.710842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.697 [2024-06-10 11:02:48.710855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.697 [2024-06-10 11:02:48.710865] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.697 [2024-06-10 11:02:48.710871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.697 [2024-06-10 11:02:48.719199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.697 qpair failed and we were unable to recover it.
00:35:19.955 [2024-06-10 11:02:48.730801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.955 [2024-06-10 11:02:48.730849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.955 [2024-06-10 11:02:48.730863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.955 [2024-06-10 11:02:48.730869] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.955 [2024-06-10 11:02:48.730876] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.955 [2024-06-10 11:02:48.739274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.955 qpair failed and we were unable to recover it.
00:35:19.955 [2024-06-10 11:02:48.750831] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.955 [2024-06-10 11:02:48.750873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.955 [2024-06-10 11:02:48.750887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.955 [2024-06-10 11:02:48.750893] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.955 [2024-06-10 11:02:48.750900] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.955 [2024-06-10 11:02:48.759330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.955 qpair failed and we were unable to recover it.
00:35:19.955 [2024-06-10 11:02:48.770922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.955 [2024-06-10 11:02:48.770967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.955 [2024-06-10 11:02:48.770981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.955 [2024-06-10 11:02:48.770988] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.955 [2024-06-10 11:02:48.770994] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.955 [2024-06-10 11:02:48.779363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.955 qpair failed and we were unable to recover it.
00:35:19.955 [2024-06-10 11:02:48.791025] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.955 [2024-06-10 11:02:48.791071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.955 [2024-06-10 11:02:48.791084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.955 [2024-06-10 11:02:48.791091] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.955 [2024-06-10 11:02:48.791097] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.955 [2024-06-10 11:02:48.799436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.955 qpair failed and we were unable to recover it.
00:35:19.955 [2024-06-10 11:02:48.811045] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.955 [2024-06-10 11:02:48.811087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.955 [2024-06-10 11:02:48.811101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.955 [2024-06-10 11:02:48.811108] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.955 [2024-06-10 11:02:48.811114] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.955 [2024-06-10 11:02:48.819504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.955 qpair failed and we were unable to recover it.
00:35:19.955 [2024-06-10 11:02:48.831149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:19.955 [2024-06-10 11:02:48.831192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:19.955 [2024-06-10 11:02:48.831205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:19.955 [2024-06-10 11:02:48.831212] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:19.955 [2024-06-10 11:02:48.831218] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:35:19.955 [2024-06-10 11:02:48.839575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:35:19.955 qpair failed and we were unable to recover it.
00:35:19.955 [2024-06-10 11:02:48.851215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.955 [2024-06-10 11:02:48.851258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.955 [2024-06-10 11:02:48.851271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.955 [2024-06-10 11:02:48.851277] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.955 [2024-06-10 11:02:48.851284] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:19.955 [2024-06-10 11:02:48.859615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:19.955 qpair failed and we were unable to recover it. 00:35:19.955 [2024-06-10 11:02:48.871257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.955 [2024-06-10 11:02:48.871298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.955 [2024-06-10 11:02:48.871312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.955 [2024-06-10 11:02:48.871319] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.955 [2024-06-10 11:02:48.871325] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:19.955 [2024-06-10 11:02:48.879749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:19.955 qpair failed and we were unable to recover it. 00:35:19.955 [2024-06-10 11:02:48.891335] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.955 [2024-06-10 11:02:48.891377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.955 [2024-06-10 11:02:48.891394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.955 [2024-06-10 11:02:48.891400] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.955 [2024-06-10 11:02:48.891406] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:19.955 [2024-06-10 11:02:48.899761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:19.955 qpair failed and we were unable to recover it. 
00:35:19.955 [2024-06-10 11:02:48.911370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.955 [2024-06-10 11:02:48.911415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.955 [2024-06-10 11:02:48.911429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.955 [2024-06-10 11:02:48.911436] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.955 [2024-06-10 11:02:48.911443] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:19.955 [2024-06-10 11:02:48.919858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:19.955 qpair failed and we were unable to recover it. 00:35:19.955 [2024-06-10 11:02:48.931457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.955 [2024-06-10 11:02:48.931501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.956 [2024-06-10 11:02:48.931514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.956 [2024-06-10 11:02:48.931521] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.956 [2024-06-10 11:02:48.931527] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:19.956 [2024-06-10 11:02:48.939882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:19.956 qpair failed and we were unable to recover it. 00:35:19.956 [2024-06-10 11:02:48.951423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.956 [2024-06-10 11:02:48.951472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.956 [2024-06-10 11:02:48.951486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.956 [2024-06-10 11:02:48.951493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.956 [2024-06-10 11:02:48.951499] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:19.956 [2024-06-10 11:02:48.959927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:19.956 qpair failed and we were unable to recover it. 
00:35:19.956 [2024-06-10 11:02:48.971568] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:19.956 [2024-06-10 11:02:48.971611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:19.956 [2024-06-10 11:02:48.971624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:19.956 [2024-06-10 11:02:48.971631] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:19.956 [2024-06-10 11:02:48.971637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:19.956 [2024-06-10 11:02:48.980024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:19.956 qpair failed and we were unable to recover it. 00:35:20.214 [2024-06-10 11:02:48.991634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.214 [2024-06-10 11:02:48.991680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.214 [2024-06-10 11:02:48.991694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.214 [2024-06-10 11:02:48.991700] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.214 [2024-06-10 11:02:48.991706] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.214 [2024-06-10 11:02:49.000080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.214 qpair failed and we were unable to recover it. 00:35:20.214 [2024-06-10 11:02:49.011647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.214 [2024-06-10 11:02:49.011692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.214 [2024-06-10 11:02:49.011706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.214 [2024-06-10 11:02:49.011713] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.214 [2024-06-10 11:02:49.011719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.214 [2024-06-10 11:02:49.020186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.214 qpair failed and we were unable to recover it. 
00:35:20.214 [2024-06-10 11:02:49.031761] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.214 [2024-06-10 11:02:49.031805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.214 [2024-06-10 11:02:49.031819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.214 [2024-06-10 11:02:49.031826] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.214 [2024-06-10 11:02:49.031832] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.214 [2024-06-10 11:02:49.040153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.214 qpair failed and we were unable to recover it. 00:35:20.214 [2024-06-10 11:02:49.051795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.214 [2024-06-10 11:02:49.051837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.214 [2024-06-10 11:02:49.051851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.214 [2024-06-10 11:02:49.051858] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.214 [2024-06-10 11:02:49.051864] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.214 [2024-06-10 11:02:49.060259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.214 qpair failed and we were unable to recover it. 00:35:20.214 [2024-06-10 11:02:49.071852] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.214 [2024-06-10 11:02:49.071895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.214 [2024-06-10 11:02:49.071909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.214 [2024-06-10 11:02:49.071916] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.214 [2024-06-10 11:02:49.071922] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.214 [2024-06-10 11:02:49.080323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.214 qpair failed and we were unable to recover it. 
00:35:20.214 [2024-06-10 11:02:49.091948] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.091991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.092005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.092011] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.092018] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.100337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 00:35:20.215 [2024-06-10 11:02:49.111922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.111978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.111992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.111998] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.112005] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.120427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 00:35:20.215 [2024-06-10 11:02:49.132051] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.132093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.132107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.132114] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.132120] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.140514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 
00:35:20.215 [2024-06-10 11:02:49.152077] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.152121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.152134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.152144] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.152150] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.160494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 00:35:20.215 [2024-06-10 11:02:49.172179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.172225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.172239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.172246] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.172251] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.180594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 00:35:20.215 [2024-06-10 11:02:49.192197] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.192246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.192259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.192266] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.192272] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.200668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 
00:35:20.215 [2024-06-10 11:02:49.212265] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.212310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.212323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.212330] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.212336] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.220692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 00:35:20.215 [2024-06-10 11:02:49.232328] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.215 [2024-06-10 11:02:49.232380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.215 [2024-06-10 11:02:49.232395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.215 [2024-06-10 11:02:49.232402] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.215 [2024-06-10 11:02:49.232409] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.215 [2024-06-10 11:02:49.240733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.215 qpair failed and we were unable to recover it. 00:35:20.473 [2024-06-10 11:02:49.252391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-06-10 11:02:49.252435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-06-10 11:02:49.252448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-06-10 11:02:49.252455] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-06-10 11:02:49.252462] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.473 [2024-06-10 11:02:49.260817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.473 qpair failed and we were unable to recover it. 
00:35:20.473 [2024-06-10 11:02:49.272433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-06-10 11:02:49.272474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-06-10 11:02:49.272488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-06-10 11:02:49.272495] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-06-10 11:02:49.272501] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.473 [2024-06-10 11:02:49.280829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-06-10 11:02:49.292511] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-06-10 11:02:49.292554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-06-10 11:02:49.292567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-06-10 11:02:49.292574] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-06-10 11:02:49.292580] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.473 [2024-06-10 11:02:49.300962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-06-10 11:02:49.312511] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-06-10 11:02:49.312549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-06-10 11:02:49.312562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-06-10 11:02:49.312569] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-06-10 11:02:49.312575] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.473 [2024-06-10 11:02:49.320995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.473 qpair failed and we were unable to recover it. 
00:35:20.473 [2024-06-10 11:02:49.332595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-06-10 11:02:49.332638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-06-10 11:02:49.332655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-06-10 11:02:49.332661] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-06-10 11:02:49.332668] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.473 [2024-06-10 11:02:49.341096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-06-10 11:02:49.352610] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-06-10 11:02:49.352652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-06-10 11:02:49.352666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-06-10 11:02:49.352673] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-06-10 11:02:49.352679] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.473 [2024-06-10 11:02:49.361160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.473 qpair failed and we were unable to recover it. 00:35:20.473 [2024-06-10 11:02:49.372699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.473 [2024-06-10 11:02:49.372741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.473 [2024-06-10 11:02:49.372754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.473 [2024-06-10 11:02:49.372761] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.473 [2024-06-10 11:02:49.372767] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.473 [2024-06-10 11:02:49.381249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.473 qpair failed and we were unable to recover it. 
00:35:20.473 [2024-06-10 11:02:49.392805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.474 [2024-06-10 11:02:49.392840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.474 [2024-06-10 11:02:49.392854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.474 [2024-06-10 11:02:49.392861] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.474 [2024-06-10 11:02:49.392867] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.474 [2024-06-10 11:02:49.401236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.474 qpair failed and we were unable to recover it. 00:35:20.474 [2024-06-10 11:02:49.412885] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.474 [2024-06-10 11:02:49.412929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.474 [2024-06-10 11:02:49.412943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.474 [2024-06-10 11:02:49.412950] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.474 [2024-06-10 11:02:49.412959] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.474 [2024-06-10 11:02:49.421300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.474 qpair failed and we were unable to recover it. 00:35:20.474 [2024-06-10 11:02:49.432926] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.474 [2024-06-10 11:02:49.432966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.474 [2024-06-10 11:02:49.432980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.474 [2024-06-10 11:02:49.432987] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.474 [2024-06-10 11:02:49.432993] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.474 [2024-06-10 11:02:49.441420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.474 qpair failed and we were unable to recover it. 
00:35:20.474 [2024-06-10 11:02:49.452991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.474 [2024-06-10 11:02:49.453043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.474 [2024-06-10 11:02:49.453057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.474 [2024-06-10 11:02:49.453063] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.474 [2024-06-10 11:02:49.453070] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.474 [2024-06-10 11:02:49.461469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.474 qpair failed and we were unable to recover it. 00:35:20.474 [2024-06-10 11:02:49.473025] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.474 [2024-06-10 11:02:49.473069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.474 [2024-06-10 11:02:49.473082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.474 [2024-06-10 11:02:49.473089] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.474 [2024-06-10 11:02:49.473095] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.474 [2024-06-10 11:02:49.481481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.474 qpair failed and we were unable to recover it. 00:35:20.474 [2024-06-10 11:02:49.493122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.474 [2024-06-10 11:02:49.493165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.474 [2024-06-10 11:02:49.493178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.474 [2024-06-10 11:02:49.493185] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.474 [2024-06-10 11:02:49.493191] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.474 [2024-06-10 11:02:49.501581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.474 qpair failed and we were unable to recover it. 
00:35:20.731 [2024-06-10 11:02:49.513173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.731 [2024-06-10 11:02:49.513221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.731 [2024-06-10 11:02:49.513235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.731 [2024-06-10 11:02:49.513242] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.731 [2024-06-10 11:02:49.513248] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.731 [2024-06-10 11:02:49.521663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.731 qpair failed and we were unable to recover it. 00:35:20.731 [2024-06-10 11:02:49.533279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.731 [2024-06-10 11:02:49.533321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.731 [2024-06-10 11:02:49.533335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.731 [2024-06-10 11:02:49.533343] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.731 [2024-06-10 11:02:49.533349] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.731 [2024-06-10 11:02:49.541597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.731 qpair failed and we were unable to recover it. 00:35:20.731 [2024-06-10 11:02:49.553282] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.731 [2024-06-10 11:02:49.553324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.731 [2024-06-10 11:02:49.553338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.731 [2024-06-10 11:02:49.553345] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.731 [2024-06-10 11:02:49.553351] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.731 [2024-06-10 11:02:49.561771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.731 qpair failed and we were unable to recover it. 
00:35:20.731 [2024-06-10 11:02:49.573431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.731 [2024-06-10 11:02:49.573473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.731 [2024-06-10 11:02:49.573487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.731 [2024-06-10 11:02:49.573494] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.731 [2024-06-10 11:02:49.573500] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.731 [2024-06-10 11:02:49.581837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.731 qpair failed and we were unable to recover it. 00:35:20.731 [2024-06-10 11:02:49.593436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.731 [2024-06-10 11:02:49.593482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.731 [2024-06-10 11:02:49.593495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.731 [2024-06-10 11:02:49.593506] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.731 [2024-06-10 11:02:49.593512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.731 [2024-06-10 11:02:49.601889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.731 qpair failed and we were unable to recover it. 00:35:20.731 [2024-06-10 11:02:49.613502] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.731 [2024-06-10 11:02:49.613538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.731 [2024-06-10 11:02:49.613552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.731 [2024-06-10 11:02:49.613559] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.731 [2024-06-10 11:02:49.613565] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.731 [2024-06-10 11:02:49.621865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.732 qpair failed and we were unable to recover it. 
00:35:20.732 [2024-06-10 11:02:49.633553] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.732 [2024-06-10 11:02:49.633590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.732 [2024-06-10 11:02:49.633603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.732 [2024-06-10 11:02:49.633610] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.732 [2024-06-10 11:02:49.633616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.732 [2024-06-10 11:02:49.642070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.732 qpair failed and we were unable to recover it. 00:35:20.732 [2024-06-10 11:02:49.653647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.732 [2024-06-10 11:02:49.653692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.732 [2024-06-10 11:02:49.653706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.732 [2024-06-10 11:02:49.653713] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.732 [2024-06-10 11:02:49.653719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.732 [2024-06-10 11:02:49.662100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.732 qpair failed and we were unable to recover it. 00:35:20.732 [2024-06-10 11:02:49.673654] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.732 [2024-06-10 11:02:49.673694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.732 [2024-06-10 11:02:49.673707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.732 [2024-06-10 11:02:49.673714] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.732 [2024-06-10 11:02:49.673720] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.732 [2024-06-10 11:02:49.682115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.732 qpair failed and we were unable to recover it. 
00:35:20.732 [2024-06-10 11:02:49.693702] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.732 [2024-06-10 11:02:49.693747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.732 [2024-06-10 11:02:49.693760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.732 [2024-06-10 11:02:49.693767] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.732 [2024-06-10 11:02:49.693774] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.732 [2024-06-10 11:02:49.702204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.732 qpair failed and we were unable to recover it. 00:35:20.732 [2024-06-10 11:02:49.713654] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.732 [2024-06-10 11:02:49.713695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.732 [2024-06-10 11:02:49.713709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.732 [2024-06-10 11:02:49.713716] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.732 [2024-06-10 11:02:49.713723] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.732 [2024-06-10 11:02:49.722219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.732 qpair failed and we were unable to recover it. 00:35:20.732 [2024-06-10 11:02:49.733801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.732 [2024-06-10 11:02:49.733846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.732 [2024-06-10 11:02:49.733860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.732 [2024-06-10 11:02:49.733867] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.732 [2024-06-10 11:02:49.733873] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:20.732 [2024-06-10 11:02:49.742236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:20.732 qpair failed and we were unable to recover it. 
00:35:20.732 [2024-06-10 11:02:49.753775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:20.732 [2024-06-10 11:02:49.753824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:20.732 [2024-06-10 11:02:49.753837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:20.732 [2024-06-10 11:02:49.753845] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:20.732 [2024-06-10 11:02:49.753851] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.762351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 00:35:21.223 [2024-06-10 11:02:49.773980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.774021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.774038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.774045] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.774052] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.782401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 00:35:21.223 [2024-06-10 11:02:49.793937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.793984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.793999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.794006] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.794012] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.802505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 
00:35:21.223 [2024-06-10 11:02:49.814067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.814112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.814126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.814133] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.814139] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.822444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 00:35:21.223 [2024-06-10 11:02:49.834119] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.834166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.834180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.834186] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.834192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.842615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 00:35:21.223 [2024-06-10 11:02:49.854195] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.854238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.854252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.854259] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.854265] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.862641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 
00:35:21.223 [2024-06-10 11:02:49.874220] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.874259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.874273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.874281] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.874287] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.882728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 00:35:21.223 [2024-06-10 11:02:49.894203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.894247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.894260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.894267] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.894274] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.902742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 00:35:21.223 [2024-06-10 11:02:49.914332] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.914375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.914389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.223 [2024-06-10 11:02:49.914396] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.223 [2024-06-10 11:02:49.914402] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.223 [2024-06-10 11:02:49.922868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.223 qpair failed and we were unable to recover it. 
00:35:21.223 [2024-06-10 11:02:49.934443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.223 [2024-06-10 11:02:49.934480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.223 [2024-06-10 11:02:49.934494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.224 [2024-06-10 11:02:49.934501] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.224 [2024-06-10 11:02:49.934507] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.224 [2024-06-10 11:02:49.942892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.224 qpair failed and we were unable to recover it. 00:35:21.224 [2024-06-10 11:02:49.954471] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.224 [2024-06-10 11:02:49.954518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.224 [2024-06-10 11:02:49.954531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.224 [2024-06-10 11:02:49.954538] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.224 [2024-06-10 11:02:49.954544] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.224 [2024-06-10 11:02:49.962967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.224 qpair failed and we were unable to recover it. 00:35:21.224 [2024-06-10 11:02:49.974506] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.224 [2024-06-10 11:02:49.974550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.224 [2024-06-10 11:02:49.974563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.224 [2024-06-10 11:02:49.974570] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.224 [2024-06-10 11:02:49.974576] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.224 [2024-06-10 11:02:49.982991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.224 qpair failed and we were unable to recover it. 
00:35:21.224 [2024-06-10 11:02:49.994512] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.224 [2024-06-10 11:02:49.994557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.224 [2024-06-10 11:02:49.994570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.224 [2024-06-10 11:02:49.994577] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.224 [2024-06-10 11:02:49.994583] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.224 [2024-06-10 11:02:50.002995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.224 qpair failed and we were unable to recover it. 00:35:21.224 [2024-06-10 11:02:50.014634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.224 [2024-06-10 11:02:50.014673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.224 [2024-06-10 11:02:50.014687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.224 [2024-06-10 11:02:50.014694] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.224 [2024-06-10 11:02:50.014700] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.481 [2024-06-10 11:02:50.023202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.481 qpair failed and we were unable to recover it. 00:35:21.481 [2024-06-10 11:02:50.034662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.481 [2024-06-10 11:02:50.034708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.481 [2024-06-10 11:02:50.034722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.481 [2024-06-10 11:02:50.034733] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.481 [2024-06-10 11:02:50.034740] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.043318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 
00:35:21.482 [2024-06-10 11:02:50.054912] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.054961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.054975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.054982] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.054989] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.063303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.074851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.074894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.074908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.074915] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.074922] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.083375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.095131] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.095174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.095188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.095195] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.095201] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.103415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 
00:35:21.482 [2024-06-10 11:02:50.115170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.115205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.115219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.115226] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.115232] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.123510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.135275] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.135316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.135330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.135337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.135343] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.143583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.155249] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.155295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.155309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.155316] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.155322] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.163641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 
00:35:21.482 [2024-06-10 11:02:50.175343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.175383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.175397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.175404] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.175410] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.183722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.195373] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.195409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.195423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.195430] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.195436] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.203763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.215448] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.215489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.215506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.215513] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.215519] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.223824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 
00:35:21.482 [2024-06-10 11:02:50.235373] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.235412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.235425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.235432] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.235438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.243860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.255468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.255510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.255524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.255531] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.255537] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.263864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 00:35:21.482 [2024-06-10 11:02:50.275478] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.275520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.275534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.275541] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.275547] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.482 [2024-06-10 11:02:50.284007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.482 qpair failed and we were unable to recover it. 
00:35:21.482 [2024-06-10 11:02:50.295521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.482 [2024-06-10 11:02:50.295563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.482 [2024-06-10 11:02:50.295576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.482 [2024-06-10 11:02:50.295583] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.482 [2024-06-10 11:02:50.295590] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.483 [2024-06-10 11:02:50.304047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.483 qpair failed and we were unable to recover it. 00:35:21.483 [2024-06-10 11:02:50.315651] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.483 [2024-06-10 11:02:50.315696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.483 [2024-06-10 11:02:50.315710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.483 [2024-06-10 11:02:50.315716] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.483 [2024-06-10 11:02:50.315722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.483 [2024-06-10 11:02:50.324111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.483 qpair failed and we were unable to recover it. 00:35:21.483 [2024-06-10 11:02:50.335738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.483 [2024-06-10 11:02:50.335776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.483 [2024-06-10 11:02:50.335789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.483 [2024-06-10 11:02:50.335796] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.483 [2024-06-10 11:02:50.335803] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.483 [2024-06-10 11:02:50.344120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.483 qpair failed and we were unable to recover it. 
00:35:21.483 [2024-06-10 11:02:50.355607] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.483 [2024-06-10 11:02:50.355648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.483 [2024-06-10 11:02:50.355661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.483 [2024-06-10 11:02:50.355668] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.483 [2024-06-10 11:02:50.355674] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.483 [2024-06-10 11:02:50.364099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.483 qpair failed and we were unable to recover it. 00:35:21.483 [2024-06-10 11:02:50.375762] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.483 [2024-06-10 11:02:50.375805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.483 [2024-06-10 11:02:50.375818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.483 [2024-06-10 11:02:50.375825] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.483 [2024-06-10 11:02:50.375831] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.483 [2024-06-10 11:02:50.384230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.483 qpair failed and we were unable to recover it. 00:35:21.483 [2024-06-10 11:02:50.395823] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:21.483 [2024-06-10 11:02:50.395871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:21.483 [2024-06-10 11:02:50.395884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:21.483 [2024-06-10 11:02:50.395891] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:21.483 [2024-06-10 11:02:50.395897] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:21.483 [2024-06-10 11:02:50.404303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:21.483 qpair failed and we were unable to recover it. 00:35:21.483 [2024-06-10 11:02:50.404395] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:21.483 A controller has encountered a failure and is being reset. 
00:35:21.483 [2024-06-10 11:02:50.404503] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:21.483 [2024-06-10 11:02:50.404864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:35:21.483 Controller properly reset. 00:35:22.048 [2024-06-10 11:02:50.943969] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Write completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.048 Read completed with error (sct=0, sc=8) 00:35:22.048 starting I/O failed 00:35:22.049 Write completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Write completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Read completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Write completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Read completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Read completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Write completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Write completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 Read completed with error (sct=0, sc=8) 00:35:22.049 starting I/O failed 00:35:22.049 [2024-06-10 11:02:50.944477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 3 00:35:22.614 [2024-06-10 11:02:51.520981] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:22.614 Read completed with error (sct=0, sc=8) 00:35:22.614 starting I/O failed 00:35:22.614 Read completed with error (sct=0, sc=8) 00:35:22.614 starting I/O failed 00:35:22.614 Read completed with error (sct=0, sc=8) 00:35:22.614 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Read completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 Write completed with error (sct=0, sc=8) 00:35:22.615 starting I/O failed 00:35:22.615 [2024-06-10 11:02:51.521491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:23.182 [2024-06-10 11:02:52.095967] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:23.182 Read completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O 
failed 00:35:23.182 Read completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Read completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Read completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Read completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Read completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Read completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.182 Write completed with error (sct=0, sc=8) 00:35:23.182 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Write completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Write completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Write completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Write completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Write completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 Read completed with error (sct=0, sc=8) 00:35:23.183 starting I/O failed 00:35:23.183 [2024-06-10 11:02:52.096478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:23.183 Initializing NVMe Controllers 00:35:23.183 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:23.183 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:23.183 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:23.183 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:23.183 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:23.183 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:23.183 Initialization complete. Launching workers. 
00:35:23.183 Starting thread on core 1 00:35:23.183 Starting thread on core 2 00:35:23.183 Starting thread on core 3 00:35:23.183 Starting thread on core 0 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:23.183 00:35:23.183 real 0m12.949s 00:35:23.183 user 0m27.389s 00:35:23.183 sys 0m3.067s 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:23.183 ************************************ 00:35:23.183 END TEST nvmf_target_disconnect_tc2 00:35:23.183 ************************************ 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:23.183 11:02:52 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:23.440 ************************************ 00:35:23.440 START TEST nvmf_target_disconnect_tc3 00:35:23.440 ************************************ 00:35:23.440 11:02:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc3 00:35:23.440 11:02:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=115112 00:35:23.440 11:02:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:35:23.440 11:02:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:35:23.440 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.369 11:02:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 113934 00:35:25.369 11:02:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:35:25.937 [2024-06-10 11:02:54.911968] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 
00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Read completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 Write completed with error (sct=0, sc=8) 00:35:25.937 starting I/O failed 00:35:25.937 [2024-06-10 11:02:54.912599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:25.937 [2024-06-10 11:02:54.914545] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:25.937 [2024-06-10 11:02:54.914561] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:25.937 [2024-06-10 11:02:54.914567] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:27.314 [2024-06-10 11:02:55.917313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:27.314 qpair failed and we were unable to recover it. 
00:35:27.314 [2024-06-10 11:02:55.919171] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:27.314 [2024-06-10 11:02:55.919187] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:27.314 [2024-06-10 11:02:55.919193] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:27.314 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 113934 Killed "${NVMF_APP[@]}" "$@" 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=115792 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 115792 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@830 -- # '[' -z 115792 ']' 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:27.314 11:02:56 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:27.314 [2024-06-10 11:02:56.278473] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:27.314 [2024-06-10 11:02:56.278519] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.314 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.572 [2024-06-10 11:02:56.353178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:27.572 [2024-06-10 11:02:56.424106] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:27.572 [2024-06-10 11:02:56.424148] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.572 [2024-06-10 11:02:56.424155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.572 [2024-06-10 11:02:56.424161] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.572 [2024-06-10 11:02:56.424166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.572 [2024-06-10 11:02:56.424279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:35:27.572 [2024-06-10 11:02:56.424389] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:35:27.572 [2024-06-10 11:02:56.424495] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:35:27.572 [2024-06-10 11:02:56.424496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:35:28.140 [2024-06-10 11:02:56.921872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:28.140 qpair failed and we were unable to recover it. 00:35:28.140 [2024-06-10 11:02:56.923741] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:28.140 [2024-06-10 11:02:56.923759] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:28.140 [2024-06-10 11:02:56.923766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@863 -- # return 0 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:28.140 Malloc0 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 
00:35:28.140 [2024-06-10 11:02:57.155991] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x24a0990/0x249ffd0) succeed. 00:35:28.140 [2024-06-10 11:02:57.165246] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x24a1d40/0x24a0550) succeed. 00:35:28.140 [2024-06-10 11:02:57.165268] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:35:28.140 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:28.400 [2024-06-10 11:02:57.193543] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.400 11:02:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 115112 00:35:28.968 [2024-06-10 11:02:57.926449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:28.968 qpair failed and we were unable to recover it. 
00:35:28.968 [2024-06-10 11:02:57.928404] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:28.968 [2024-06-10 11:02:57.928420] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:28.968 [2024-06-10 11:02:57.928427] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:29.904 [2024-06-10 11:02:58.931088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:29.904 qpair failed and we were unable to recover it. 00:35:29.904 [2024-06-10 11:02:58.932841] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:29.904 [2024-06-10 11:02:58.932857] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:29.904 [2024-06-10 11:02:58.932863] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:31.280 [2024-06-10 11:02:59.935464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-06-10 11:02:59.937194] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:31.280 [2024-06-10 11:02:59.937208] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:31.280 [2024-06-10 11:02:59.937213] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:32.215 [2024-06-10 11:03:00.939848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:32.215 qpair failed and we were unable to recover it. 00:35:32.215 [2024-06-10 11:03:00.941717] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:32.215 [2024-06-10 11:03:00.941734] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:32.215 [2024-06-10 11:03:00.941741] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:33.152 [2024-06-10 11:03:01.944451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:33.152 qpair failed and we were unable to recover it. 00:35:33.152 [2024-06-10 11:03:01.946179] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:33.152 [2024-06-10 11:03:01.946194] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:33.152 [2024-06-10 11:03:01.946200] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:34.089 [2024-06-10 11:03:02.948905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:34.089 qpair failed and we were unable to recover it. 
00:35:34.089 [2024-06-10 11:03:02.950698] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:34.089 [2024-06-10 11:03:02.950713] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:34.089 [2024-06-10 11:03:02.950719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:35:35.025 [2024-06-10 11:03:03.953311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:35.025 qpair failed and we were unable to recover it. 00:35:35.593 [2024-06-10 11:03:04.512972] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Read completed with error 
(sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 Write completed with error (sct=0, sc=8) 00:35:35.593 starting I/O failed 00:35:35.593 [2024-06-10 11:03:04.513362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:35.593 [2024-06-10 11:03:04.515121] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:35.593 [2024-06-10 11:03:04.515136] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:35.593 [2024-06-10 11:03:04.515142] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:35:36.530 [2024-06-10 11:03:05.517848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:36.530 qpair failed and we were unable to recover it. 00:35:36.530 [2024-06-10 11:03:05.519572] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:36.530 [2024-06-10 11:03:05.519587] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:36.530 [2024-06-10 11:03:05.519593] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:35:37.908 [2024-06-10 11:03:06.522253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:37.908 qpair failed and we were unable to recover it. 00:35:38.167 [2024-06-10 11:03:07.071969] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 
starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Write completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 Read completed with error (sct=0, sc=8) 00:35:38.167 starting I/O failed 00:35:38.167 [2024-06-10 11:03:07.072367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:38.167 [2024-06-10 11:03:07.074193] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:38.167 [2024-06-10 11:03:07.074208] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:38.167 [2024-06-10 11:03:07.074215] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:39.104 [2024-06-10 11:03:08.076871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:39.104 qpair failed and we were unable to recover it. 00:35:39.104 [2024-06-10 11:03:08.078697] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:39.104 [2024-06-10 11:03:08.078712] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:39.104 [2024-06-10 11:03:08.078718] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:35:40.481 [2024-06-10 11:03:09.081330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:40.481 qpair failed and we were unable to recover it. 00:35:40.481 [2024-06-10 11:03:09.081429] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:40.481 A controller has encountered a failure and is being reset. 
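The 32-entry burst above is the qpair's outstanding I/O being drained on teardown: each read/write is completed back with sct=0, sc=8, which in the NVMe generic status set reads as Command Aborted due to SQ Deletion. Once the keep-alive submission itself fails, the controller is marked failed, the reset begins, and the host fails over to the second listener as the next line shows. A trivial sketch for counting such abort bursts in a saved console log (the file name is hypothetical):

grep -c 'completed with error (sct=0, sc=8)' target_disconnect_tc3.log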
00:35:40.481 Resorting to new failover address 192.168.100.9 00:35:40.738 [2024-06-10 11:03:09.632970] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Write completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 Read completed with error (sct=0, sc=8) 00:35:40.738 starting I/O failed 00:35:40.738 [2024-06-10 11:03:09.633371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:40.738 [2024-06-10 11:03:09.635132] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:40.738 [2024-06-10 11:03:09.635148] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:40.738 
[2024-06-10 11:03:09.635155] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:41.669 [2024-06-10 11:03:10.637817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:41.669 qpair failed and we were unable to recover it. 00:35:41.669 [2024-06-10 11:03:10.639647] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:41.669 [2024-06-10 11:03:10.639662] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:41.669 [2024-06-10 11:03:10.639668] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:35:42.640 [2024-06-10 11:03:11.642380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:42.640 qpair failed and we were unable to recover it. 00:35:42.640 [2024-06-10 11:03:11.642448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:42.640 [2024-06-10 11:03:11.642541] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:35:42.900 [2024-06-10 11:03:11.673320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:35:42.900 Controller properly reset. 00:35:42.900 Initializing NVMe Controllers 00:35:42.900 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:42.900 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:42.900 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:42.900 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:42.900 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:42.900 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:42.900 Initialization complete. Launching workers. 
00:35:42.900 Starting thread on core 1 00:35:42.900 Starting thread on core 2 00:35:42.900 Starting thread on core 3 00:35:42.900 Starting thread on core 0 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:35:42.900 00:35:42.900 real 0m19.505s 00:35:42.900 user 1m0.187s 00:35:42.900 sys 0m4.920s 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:35:42.900 ************************************ 00:35:42.900 END TEST nvmf_target_disconnect_tc3 00:35:42.900 ************************************ 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:42.900 rmmod nvme_rdma 00:35:42.900 rmmod nvme_fabrics 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 115792 ']' 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 115792 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 115792 ']' 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 115792 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 115792 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 115792' 00:35:42.900 killing process with pid 115792 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 115792 00:35:42.900 11:03:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 115792 00:35:43.159 11:03:12 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:43.159 11:03:12 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:43.159 00:35:43.159 real 0m40.028s 00:35:43.159 user 2m32.526s 00:35:43.159 sys 0m12.843s 00:35:43.159 11:03:12 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:43.159 11:03:12 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:43.159 ************************************ 00:35:43.159 END TEST nvmf_target_disconnect 00:35:43.159 ************************************ 00:35:43.159 11:03:12 nvmf_rdma -- nvmf/nvmf.sh@125 -- # timing_exit host 00:35:43.159 11:03:12 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:43.159 11:03:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:43.159 11:03:12 nvmf_rdma -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:43.159 00:35:43.159 real 29m1.310s 00:35:43.159 user 84m35.871s 00:35:43.159 sys 5m36.993s 00:35:43.159 11:03:12 nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:43.159 11:03:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:43.159 ************************************ 00:35:43.159 END TEST nvmf_rdma 00:35:43.159 ************************************ 00:35:43.159 11:03:12 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:35:43.159 11:03:12 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:43.159 11:03:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:43.159 11:03:12 -- common/autotest_common.sh@10 -- # set +x 00:35:43.416 ************************************ 00:35:43.416 START TEST spdkcli_nvmf_rdma 00:35:43.416 ************************************ 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:35:43.416 * Looking for test storage... 
00:35:43.416 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=118909 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 118909 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@830 -- # '[' -z 118909 ']' 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:43.416 11:03:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:43.416 [2024-06-10 11:03:12.364868] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:35:43.416 [2024-06-10 11:03:12.364919] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118909 ] 00:35:43.416 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.416 [2024-06-10 11:03:12.423829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:43.674 [2024-06-10 11:03:12.502136] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.674 [2024-06-10 11:03:12.502139] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@863 -- # return 0 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:35:44.239 11:03:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 
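The device scan that follows builds the candidate NIC list purely from PCI vendor/device IDs; with SPDK_TEST_NVMF_NICS=e810 the ID of interest is Intel vendor 0x8086, device 0x159b, which is exactly what the two "Found 0000:af:00.x" lines report. The same check can be done by hand; the sysfs path below is the one the script itself globs:

lspci -d 8086:159b                            # should list both E810 ports at 0000:af:00.0/1
ls /sys/bus/pci/devices/0000:af:00.0/net      # kernel netdev name behind the port, e.g. cvl_0_0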
00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:50.807 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 
(0x8086 - 0x159b)' 00:35:50.807 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # modinfo irdma 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:50.807 Found net devices under 0000:af:00.0: cvl_0_0 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:50.807 Found net devices under 0000:af:00.1: cvl_0_1 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:50.807 11:03:19 spdkcli_nvmf_rdma 
-- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo cvl_0_0 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo cvl_0_1 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.807 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:35:50.808 20: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:35:50.808 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:35:50.808 altname enp175s0f0np0 00:35:50.808 altname ens801f0np0 00:35:50.808 inet 192.168.100.8/24 scope global cvl_0_0 00:35:50.808 valid_lft forever preferred_lft forever 
00:35:50.808 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:35:50.808 valid_lft forever preferred_lft forever 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:35:50.808 21: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:35:50.808 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:35:50.808 altname enp175s0f1np1 00:35:50.808 altname ens801f1np1 00:35:50.808 inet 192.168.100.9/24 scope global cvl_0_1 00:35:50.808 valid_lft forever preferred_lft forever 00:35:50.808 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:35:50.808 valid_lft forever preferred_lft forever 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo cvl_0_0 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo cvl_0_1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:35:50.808 11:03:19 
spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:50.808 192.168.100.9' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:50.808 192.168.100.9' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:50.808 192.168.100.9' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:50.808 11:03:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:50.808 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:50.808 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:50.808 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:50.808 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:50.808 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:50.808 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:50.808 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 
True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:50.808 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:50.808 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:50.808 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:50.808 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:50.808 ' 00:35:53.342 [2024-06-10 11:03:21.767059] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1fbaec0/0x1fba500) succeed. 00:35:53.342 [2024-06-10 11:03:21.776228] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1fbc2f0/0x1fbaa80) succeed. 00:35:53.342 [2024-06-10 11:03:21.776250] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:35:53.342 [2024-06-10 11:03:21.777951] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/1535 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:35:53.342 [2024-06-10 11:03:21.777982] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
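The iobuf.c errors above are sizing warnings rather than failures: the RDMA transport wants a per-poll-group cache of 1535 large iobuf entries but the global pool only holds 1024, so the cache is only partially populated, and the transport.c line that follows reports the same shortfall at poll-group creation. The log's own remedy is to raise spdk_iobuf_opts.large_pool_count before subsystem init; a sketch of doing that over RPC, assuming the target was started with --wait-for-rpc so pre-init RPCs are accepted (flag names per current rpc.py; verify with rpc.py iobuf_set_options -h, and see scripts/calc-iobuf.py for sizing):

./scripts/rpc.py iobuf_set_options --large-pool-count 4096
./scripts/rpc.py framework_start_init    # resume normal startup after tuning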
00:35:53.342 [2024-06-10 11:03:21.778995] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:35:53.342 [2024-06-10 11:03:21.780487] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/1535 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:35:53.342 [2024-06-10 11:03:21.780500] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:35:53.342 [2024-06-10 11:03:21.781509] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:35:54.277 [2024-06-10 11:03:22.953511] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:35:56.181 [2024-06-10 11:03:25.116616] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:35:58.086 [2024-06-10 11:03:26.974758] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:35:59.464 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:59.464 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:59.464 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:59.464 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:59.464 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:59.464 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:59.464 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:59.464 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:59.464 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:59.464 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:59.464 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:59.464 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:59.464 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:35:59.723 11:03:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:59.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:59.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
00:35:59.981 11:03:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:35:59.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:35:59.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:35:59.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:35:59.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:35:59.981 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:35:59.981 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:35:59.981 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:35:59.981 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:35:59.981 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:35:59.981 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:35:59.981 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:35:59.981 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:35:59.981 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' '
00:36:05.267 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:36:05.267 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:36:05.267 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:36:05.267 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:36:05.267 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:36:05.267 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:36:05.267 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:36:05.267 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:36:05.267 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:36:05.267 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:36:05.267 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:36:05.267 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:36:05.267 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:36:05.267 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
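
Note: each 'Executing command' entry above is a triple fed to spdkcli_job.py: the spdkcli command itself, a substring to look for in the resulting tree, and a flag that appears to say whether the substring should still be present afterwards, which is why these teardown entries all carry False. A sketch of replaying one teardown step outside the batch job, assuming spdkcli.py joins its arguments into a one-shot command the same way the ll invocations elsewhere in this log do:

  # One-shot spdkcli command against the running target (hypothetical standalone use).
  ./scripts/spdkcli.py '/bdevs/malloc delete Malloc1'
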
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 118909
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@949 -- # '[' -z 118909 ']'
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # kill -0 118909
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # uname
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 118909
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 118909'
killing process with pid 118909
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # kill 118909
00:36:05.267 11:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # wait 118909
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:05.267 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:36:05.268
00:36:05.268 real 0m22.053s
00:36:05.268 user 0m46.756s
00:36:05.268 sys 0m5.351s
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable
00:36:05.268 11:03:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:36:05.268 ************************************
00:36:05.268 END TEST spdkcli_nvmf_rdma
00:36:05.268 ************************************
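
Note: killprocess, whose trace ends above, is the harness's guarded teardown: bail out if no pid was recorded, treat an already-dead pid as success, pick plain kill versus sudo kill based on the process's comm name (reactor_0 here), then kill and wait to reap the exit status. A condensed bash sketch of that pattern, assuming the target was started by the same shell so that wait can reap it:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # nothing was ever started
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      if [ "$(ps --no-headers -o comm= "$pid")" = "sudo" ]; then
          sudo kill "$pid"                     # signal through the sudo wrapper
      else
          kill "$pid"
      fi
      wait "$pid"                              # reap and propagate the exit status
  }
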
00:36:05.268 11:03:34 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:36:05.268 11:03:34 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:36:05.268 11:03:34 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:36:05.268 11:03:34 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:36:05.268 11:03:34 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:36:05.268 11:03:34 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:36:05.268 11:03:34 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:36:05.268 11:03:34 -- common/autotest_common.sh@723 -- # xtrace_disable
00:36:05.268 11:03:34 -- common/autotest_common.sh@10 -- # set +x
00:36:05.526 11:03:34 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:36:05.526 11:03:34 -- common/autotest_common.sh@1391 -- # local autotest_es=0
00:36:05.526 11:03:34 -- common/autotest_common.sh@1392 -- # xtrace_disable
00:36:05.526 11:03:34 -- common/autotest_common.sh@10 -- # set +x
00:36:09.748 INFO: APP EXITING
00:36:09.748 INFO: killing all VMs
00:36:09.748 INFO: killing vhost app
00:36:09.748 INFO: EXIT DONE
00:36:11.650 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:36:11.650 Waiting for block devices as requested
00:36:11.908 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:36:11.908 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:11.908 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:12.166 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:12.166 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:12.166 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:12.166 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:12.424 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:12.424 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:12.424 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:36:12.682 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:36:12.682 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:36:12.682 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:36:12.682 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:36:12.940 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:36:12.940 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:36:12.940 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:36:16.239 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
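
Note: the driver transitions above are the rebind pass that returns test devices to their kernel drivers once the target exits: the NVMe SSD goes back from vfio-pci to nvme and the I/OAT DMA channels back to ioatdma, while the blocklisted 0000:5f:00.0 controller is skipped. This matches what SPDK's setup script does on reset; a sketch of driving the same rebind by hand, assuming the standard scripts/setup.sh from the repo:

  # Rebind everything SPDK claimed back to kernel drivers, then show the result.
  sudo ./scripts/setup.sh reset
  ./scripts/setup.sh status
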
00:36:16.239 Cleaning
00:36:16.239 Removing: /var/run/dpdk/spdk0/config
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:36:16.239 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:16.239 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:16.239 Removing: /var/run/dpdk/spdk1/config
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:36:16.239 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:36:16.239 Removing: /var/run/dpdk/spdk1/hugepage_info
00:36:16.239 Removing: /var/run/dpdk/spdk1/mp_socket
00:36:16.240 Removing: /var/run/dpdk/spdk2/config
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:36:16.240 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:36:16.240 Removing: /var/run/dpdk/spdk2/hugepage_info
00:36:16.240 Removing: /var/run/dpdk/spdk3/config
00:36:16.240 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:36:16.498 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:36:16.498 Removing: /var/run/dpdk/spdk3/hugepage_info
00:36:16.498 Removing: /var/run/dpdk/spdk4/config
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:36:16.498 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:36:16.498 Removing: /var/run/dpdk/spdk4/hugepage_info
00:36:16.498 Removing: /dev/shm/bdevperf_trace.pid19078
00:36:16.498 Removing: /dev/shm/bdev_svc_trace.1
00:36:16.498 Removing: /dev/shm/nvmf_trace.0
00:36:16.498 Removing: /dev/shm/spdk_tgt_trace.pid3880381
00:36:16.498 Removing: /var/run/dpdk/spdk0
00:36:16.498 Removing: /var/run/dpdk/spdk1
00:36:16.498 Removing: /var/run/dpdk/spdk2
00:36:16.498 Removing: /var/run/dpdk/spdk3
00:36:16.498 Removing: /var/run/dpdk/spdk4
00:36:16.498 Removing: /var/run/dpdk/spdk_pid106698
00:36:16.498 Removing: /var/run/dpdk/spdk_pid106935
00:36:16.498 Removing: /var/run/dpdk/spdk_pid112774
00:36:16.498 Removing: /var/run/dpdk/spdk_pid113259
00:36:16.498 Removing: /var/run/dpdk/spdk_pid115112
00:36:16.498 Removing: /var/run/dpdk/spdk_pid118909
00:36:16.498 Removing: /var/run/dpdk/spdk_pid17175
00:36:16.498 Removing: /var/run/dpdk/spdk_pid18008
00:36:16.498 Removing: /var/run/dpdk/spdk_pid19078
00:36:16.498 Removing: /var/run/dpdk/spdk_pid23225
00:36:16.498 Removing: /var/run/dpdk/spdk_pid30608
00:36:16.498 Removing: /var/run/dpdk/spdk_pid31598
00:36:16.498 Removing: /var/run/dpdk/spdk_pid32881
00:36:16.498 Removing: /var/run/dpdk/spdk_pid33787
00:36:16.498 Removing: /var/run/dpdk/spdk_pid34024
00:36:16.498 Removing: /var/run/dpdk/spdk_pid38510
00:36:16.498 Removing: /var/run/dpdk/spdk_pid38591
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3878266
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3879323
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3880381
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3881007
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3881942
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3882181
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3883139
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3883365
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3883630
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3888698
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3889945
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3890223
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3890510
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3890812
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3891095
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3891345
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3891597
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3891866
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3892903
00:36:16.498 Removing: /var/run/dpdk/spdk_pid3896282
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3896540
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3896799
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3896818
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3897299
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3897396
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3897796
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3898023
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3898280
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3898509
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3898615
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3898781
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3899326
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3899570
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3899860
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3900125
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3900150
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3900219
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3900471
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3900722
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3900991
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3901245
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3901517
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3901769
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3902019
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3902272
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3902536
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3902787
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3903043
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3903302
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3903572
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3903830
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3904088
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3904358
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3904621
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3904877
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3905119
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3905368
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3905440
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3905746
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3909887
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3995635
00:36:16.757 Removing: /var/run/dpdk/spdk_pid3999953
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4010102
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4015492
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4019405
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4020442
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4035309
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4035693
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4039958
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4045817
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4048376
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4058599
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4083458
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4087239
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4133178
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4138215
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4169517
00:36:16.757 Removing: /var/run/dpdk/spdk_pid4185163
00:36:16.757 Removing: /var/run/dpdk/spdk_pid43203
00:36:16.757 Removing: /var/run/dpdk/spdk_pid43659
00:36:16.757 Removing: /var/run/dpdk/spdk_pid44248
00:36:16.757 Removing: /var/run/dpdk/spdk_pid44336
00:36:17.015 Removing: /var/run/dpdk/spdk_pid45942
00:36:17.015 Removing: /var/run/dpdk/spdk_pid47739
00:36:17.015 Removing: /var/run/dpdk/spdk_pid49535
00:36:17.015 Removing: /var/run/dpdk/spdk_pid51334
00:36:17.015 Removing: /var/run/dpdk/spdk_pid53114
00:36:17.015 Removing: /var/run/dpdk/spdk_pid54844
00:36:17.015 Removing: /var/run/dpdk/spdk_pid60814
00:36:17.015 Removing: /var/run/dpdk/spdk_pid61377
00:36:17.015 Removing: /var/run/dpdk/spdk_pid63099
00:36:17.015 Removing: /var/run/dpdk/spdk_pid64125
00:36:17.015 Removing: /var/run/dpdk/spdk_pid69633
00:36:17.015 Removing: /var/run/dpdk/spdk_pid72842
00:36:17.015 Removing: /var/run/dpdk/spdk_pid78501
00:36:17.015 Removing: /var/run/dpdk/spdk_pid88245
00:36:17.015 Removing: /var/run/dpdk/spdk_pid88254
00:36:17.015 Clean
11:03:45 -- common/autotest_common.sh@1450 -- # return 0
11:03:45 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
11:03:45 -- common/autotest_common.sh@729 -- # xtrace_disable
11:03:45 -- common/autotest_common.sh@10 -- # set +x
11:03:45 -- spdk/autotest.sh@386 -- # timing_exit autotest
11:03:45 -- common/autotest_common.sh@729 -- # xtrace_disable
11:03:45 -- common/autotest_common.sh@10 -- # set +x
11:03:45 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt
11:03:45 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log ]]
11:03:45 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/udev.log
11:03:45 -- spdk/autotest.sh@391 -- # hash lcov
11:03:45 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
11:03:45 -- spdk/autotest.sh@393 -- # hostname
11:03:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info
00:36:17.274 geninfo: WARNING: invalid characters removed from testname!
00:36:39.184 11:04:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:39.184 11:04:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:39.749 11:04:08 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:41.649 11:04:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:43.062 11:04:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:44.962 11:04:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_total.info
00:36:46.863 11:04:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
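
Note: the six lcov passes above first merge the pre-test baseline (cov_base.info) with the post-test capture (cov_test.info), then repeatedly subtract code that is not SPDK's own: the bundled dpdk/ tree, system headers under /usr, and a few example apps. A condensed sketch of the same pipeline plus an HTML report, assuming lcov and genhtml are installed and reusing this job's output directory:

  out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' -o "$out/cov_total.info"
  genhtml "$out/cov_total.info" --output-directory "$out/coverage"   # browsable report
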
00:36:46.863 11:04:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:36:46.863 11:04:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:36:46.863 11:04:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:46.863 11:04:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:46.863 11:04:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:46.863 11:04:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:46.863 11:04:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:46.863 11:04:15 -- paths/export.sh@5 -- $ export PATH
00:36:46.863 11:04:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:46.863 11:04:15 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output
00:36:46.863 11:04:15 -- common/autobuild_common.sh@437 -- $ date +%s
00:36:46.863 11:04:15 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718010255.XXXXXX
00:36:46.863 11:04:15 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718010255.o9gkmK
00:36:46.863 11:04:15 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:36:46.863 11:04:15 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:36:46.863 11:04:15 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/'
00:36:46.863 11:04:15 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp'
00:36:46.863 11:04:15 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:36:46.863 11:04:15 -- common/autobuild_common.sh@453 -- $ get_config_params
00:36:46.863 11:04:15 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:36:46.863 11:04:15 -- common/autotest_common.sh@10 -- $ set +x
00:36:46.863 11:04:15 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:36:46.863 11:04:15 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:36:46.863 11:04:15 -- pm/common@17 -- $ local monitor
00:36:46.863 11:04:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:46.863 11:04:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:46.863 11:04:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:46.863 11:04:15 -- pm/common@21 -- $ date +%s
00:36:46.863 11:04:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:46.863 11:04:15 -- pm/common@21 -- $ date +%s
00:36:46.863 11:04:15 -- pm/common@21 -- $ date +%s
00:36:46.863 11:04:15 -- pm/common@25 -- $ sleep 1
00:36:46.863 11:04:15 -- pm/common@21 -- $ date +%s
00:36:46.863 11:04:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718010255
00:36:46.863 11:04:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718010255
00:36:46.863 11:04:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718010255
00:36:46.863 11:04:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718010255
00:36:46.863 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718010255_collect-cpu-temp.pm.log
00:36:46.863 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718010255_collect-vmstat.pm.log
00:36:46.863 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718010255_collect-cpu-load.pm.log
00:36:46.863 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718010255_collect-bmc-pm.bmc.pm.log
00:36:47.799 11:04:16 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:36:47.800 11:04:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:36:47.800 11:04:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:36:47.800 11:04:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:36:47.800 11:04:16 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:36:47.800 11:04:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:36:47.800 11:04:16 -- spdk/autopackage.sh@19 -- $ timing_finish
00:36:47.800 11:04:16 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:47.800 11:04:16 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:36:47.800 11:04:16 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt
00:36:47.800 11:04:16 -- spdk/autopackage.sh@20 -- $ exit 0
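
Note: timing_finish above renders the per-step timing data that autotest accumulated in timing.txt into a flame graph, using the exact flags shown; flamegraph.pl writes SVG to stdout, so capturing it to a file is the only thing missing from the logged command. A sketch of regenerating the graph, assuming the same FlameGraph checkout path:

  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/timing.txt \
      > build_timing.svg   # open in a browser; boxes are steps, widths are seconds
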
00:36:47.800 11:04:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:36:47.800 11:04:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:36:47.800 11:04:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:36:47.800 11:04:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:47.800 11:04:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:36:47.800 11:04:16 -- pm/common@44 -- $ pid=134960
00:36:47.800 11:04:16 -- pm/common@50 -- $ kill -TERM 134960
00:36:47.800 11:04:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:47.800 11:04:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:36:47.800 11:04:16 -- pm/common@44 -- $ pid=134962
00:36:47.800 11:04:16 -- pm/common@50 -- $ kill -TERM 134962
00:36:47.800 11:04:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:47.800 11:04:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:36:47.800 11:04:16 -- pm/common@44 -- $ pid=134964
00:36:47.800 11:04:16 -- pm/common@50 -- $ kill -TERM 134964
00:36:47.800 11:04:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:36:47.800 11:04:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:36:47.800 11:04:16 -- pm/common@44 -- $ pid=134990
00:36:47.800 11:04:16 -- pm/common@50 -- $ sudo -E kill -TERM 134990
00:36:47.800 + [[ -n 3771570 ]]
00:36:47.800 + sudo kill 3771570
00:36:47.810 [Pipeline] }
00:36:47.830 [Pipeline] // stage
00:36:47.836 [Pipeline] }
00:36:47.853 [Pipeline] // timeout
00:36:47.858 [Pipeline] }
00:36:47.872 [Pipeline] // catchError
00:36:47.877 [Pipeline] }
00:36:47.894 [Pipeline] // wrap
00:36:47.900 [Pipeline] }
00:36:47.917 [Pipeline] // catchError
00:36:47.926 [Pipeline] stage
00:36:47.928 [Pipeline] { (Epilogue)
00:36:47.943 [Pipeline] catchError
00:36:47.945 [Pipeline] {
00:36:47.961 [Pipeline] echo
00:36:47.962 Cleanup processes
00:36:47.969 [Pipeline] sh
00:36:48.254 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:36:48.254 135101 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/sdr.cache
00:36:48.254 135356 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:36:48.270 [Pipeline] sh
00:36:48.553 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:36:48.553 ++ grep -v 'sudo pgrep'
00:36:48.553 ++ awk '{print $1}'
00:36:48.553 + sudo kill -9 135101
00:36:48.565 [Pipeline] sh
00:36:48.848 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:58.852 [Pipeline] sh
00:36:59.137 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:59.137 Artifacts sizes are good
00:36:59.152 [Pipeline] archiveArtifacts
00:36:59.158 Archiving artifacts
00:36:59.364 [Pipeline] sh
00:36:59.648 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:36:59.662 [Pipeline] cleanWs
00:36:59.672 [WS-CLEANUP] Deleting project workspace...
00:36:59.672 [WS-CLEANUP] Deferred wipeout is used...
00:36:59.679 [WS-CLEANUP] done
00:36:59.681 [Pipeline] }
00:36:59.703 [Pipeline] // catchError
00:36:59.717 [Pipeline] sh
00:36:59.998 + logger -p user.info -t JENKINS-CI
00:37:00.007 [Pipeline] }
00:37:00.024 [Pipeline] // stage
00:37:00.030 [Pipeline] }
00:37:00.048 [Pipeline] // node
00:37:00.054 [Pipeline] End of Pipeline
00:37:00.088 Finished: SUCCESS